Skin Cancer Segmentation and Classification Using Vision Transformer for Automatic Analysis in Dermatoscopy-Based Noninvasive Digital System.

IF 3.3 Q2 ENGINEERING, BIOMEDICAL
International Journal of Biomedical Imaging Pub Date : 2024-02-03 eCollection Date: 2024-01-01 DOI:10.1155/2024/3022192
Galib Muhammad Shahriar Himel, Md Masudul Islam, Kh Abdullah Al-Aff, Shams Ibne Karim, Md Kabir Uddin Sikder
Citations: 0

Abstract


Skin cancer is a significant health concern worldwide, and early and accurate diagnosis plays a crucial role in improving patient outcomes. In recent years, deep learning models have shown remarkable success in various computer vision tasks, including image classification. In this study, we introduce an approach to skin cancer classification using the vision transformer, a state-of-the-art deep learning architecture that has demonstrated exceptional performance in diverse image analysis tasks. The study uses the HAM10000 dataset, a publicly available collection of 10,015 skin lesion images divided into two categories: benign (6705 images) and malignant (3310 images). The dataset consists of high-resolution images captured with dermatoscopes and carefully annotated by expert dermatologists. Preprocessing techniques such as normalization and augmentation are applied to enhance the robustness and generalization of the model. The vision transformer architecture is adapted to the skin cancer classification task: the model leverages the self-attention mechanism to capture intricate spatial and long-range dependencies within the images, enabling it to learn the features relevant for accurate classification. The Segment Anything Model (SAM) is employed to segment the cancerous areas from the images, achieving an IoU of 96.01% and a Dice coefficient of 98.14%; various pretrained models are then used for classification within the vision transformer architecture. Extensive experiments and evaluations assess the performance of our approach. The results demonstrate that, with some exceptions, the vision transformer model generally outperforms traditional deep learning architectures in skin cancer classification.
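As an aside, the IoU and Dice coefficient reported for the SAM segmentation step are standard overlap metrics between a predicted mask and a ground-truth mask. A minimal sketch of how they are computed (the masks here are toy examples, not the paper's data):

```python
import numpy as np

def iou_and_dice(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    """Compute IoU and Dice coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    iou = float(inter / union) if union else 1.0
    # Dice = 2|A ∩ B| / (|A| + |B|), equivalently 2*IoU / (1 + IoU)
    total = pred.sum() + truth.sum()
    dice = float(2 * inter / total) if total else 1.0
    return iou, dice

# Toy 4x4 masks for illustration only
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
truth = np.array([[1, 1, 0, 0],
                  [1, 0, 0, 0],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
iou, dice = iou_and_dice(pred, truth)
print(f"IoU={iou:.2f}, Dice={dice:.2f}")  # prints IoU=0.75, Dice=0.86
```

Both metrics range over [0, 1]; Dice weights the intersection more heavily, so it is always at least as large as IoU on the same pair of masks.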
Experimenting with six different models (ViT-Google, ViT-MAE, ViT-ResNet50, ViT-VAN, ViT-BEiT, and ViT-DiT), we found that the approach achieves 96.15% accuracy with Google's ViT patch-32 model, with a low false-negative ratio on the test dataset, showcasing its potential as an effective tool for aiding dermatologists in the diagnosis of skin cancer.
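For reference, the accuracy and false-negative ratio quoted for the classifiers are derived from confusion counts on the held-out test split. A minimal sketch with made-up labels (1 = malignant, 0 = benign; not the paper's results):

```python
import numpy as np

def accuracy_and_fnr(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    """Accuracy and false-negative rate for binary labels (1 = malignant)."""
    acc = float((y_true == y_pred).mean())
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))  # malignant cases missed
    positives = int(np.sum(y_true == 1))
    fnr = fn / positives if positives else 0.0
    return acc, fnr

# Hypothetical test labels and predictions for illustration only
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 0, 0, 0, 0, 1, 0])
acc, fnr = accuracy_and_fnr(y_true, y_pred)
print(f"accuracy={acc:.2f}, FNR={fnr:.2f}")  # prints accuracy=0.80, FNR=0.25
```

In a screening setting, the false-negative rate matters more than raw accuracy, since a missed malignant lesion is far costlier than a benign one flagged for review, which is why the abstract highlights the low false-negative ratio alongside the 96.15% accuracy.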

Source journal: International Journal of Biomedical Imaging
CiteScore: 12.00
Self-citation rate: 0.00%
Articles published: 11
Review time: 20 weeks
Journal description: The International Journal of Biomedical Imaging is managed by a board of editors comprising internationally renowned active researchers. The journal is freely accessible online and also offered for purchase in print format. It employs a web-based review system to ensure swift turnaround times while maintaining high standards. In addition to regular issues, special issues are organized by guest editors. The subject areas covered include (but are not limited to):
- Digital radiography and tomosynthesis
- X-ray computed tomography (CT)
- Magnetic resonance imaging (MRI)
- Single photon emission computed tomography (SPECT)
- Positron emission tomography (PET)
- Ultrasound imaging
- Diffuse optical tomography, coherence, fluorescence, bioluminescence tomography, impedance tomography
- Neutron imaging for biomedical applications
- Magnetic and optical spectroscopy, and optical biopsy
- Optical, electron, scanning tunneling/atomic force microscopy
- Small animal imaging
- Functional, cellular, and molecular imaging
- Imaging assays for screening and molecular analysis
- Microarray image analysis and bioinformatics
- Emerging biomedical imaging techniques
- Imaging modality fusion
- Biomedical imaging instrumentation
- Biomedical image processing, pattern recognition, and analysis
- Biomedical image visualization, compression, transmission, and storage
- Imaging and modeling related to systems biology and systems biomedicine
- Applied mathematics, applied physics, and chemistry related to biomedical imaging
- Grid-enabling technology for biomedical imaging and informatics