Exploring dermoscopic structures for melanoma lesions' classification.

IF 2.4 · Q3 · Computer Science, Information Systems
Frontiers in Big Data · Pub Date: 2024-03-25 · eCollection Date: 2024-01-01 · DOI: 10.3389/fdata.2024.1366312
Fiza Saeed Malik, Muhammad Haroon Yousaf, Hassan Ahmed Sial, Serestina Viriri
Citations: 0

Abstract

Background: Melanoma, one of the deadliest skin cancers, originates in melanocytes when sun exposure causes mutations. Early detection boosts the cure rate to 90%, but misclassification drops survival to 15-20%. Clinical variability makes it challenging for dermatologists to distinguish benign nevi from melanomas. Current diagnostic methods, including visual analysis and dermoscopy, have limitations, underscoring the need for AI-based understanding in dermatology.

Objectives: In this paper, we aim to explore dermoscopic structures for the classification of melanoma lesions. The training of AI models faces a challenge known as brittleness, where small changes in input images impact the classification. A study explored AI vulnerability in discerning melanoma from benign lesions using features of size, color, and shape. Tests with artificial and natural variations revealed a notable decline in accuracy, emphasizing the necessity for additional information, such as dermoscopic structures.

Methodology: The study uses datasets of dermoscopic images clinically annotated and examined by expert clinicians. Transformer- and CNN-based models are employed to classify these images based on dermoscopic structures, and classification results are validated using feature visualization. To assess susceptibility to image variations, the classifiers are evaluated on test sets containing original, duplicated, and digitally modified images; additional testing uses ISIC 2016 images. The study focuses on three dermoscopic structures crucial for melanoma detection: blue-white veil, dots/globules, and streaks.
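The brittleness probe described above — evaluating classifiers on original, duplicated, and digitally modified copies of the same test images — can be sketched as follows. The specific transforms, array shapes, and function names here are illustrative assumptions, not the study's exact protocol:

```python
import numpy as np

def make_test_variants(image: np.ndarray) -> dict:
    """Build the kinds of test sets the study compares: the original
    image, an exact duplicate, and simple digital modifications that
    should not change a robust classifier's prediction."""
    return {
        "original": image,
        "duplicate": image.copy(),          # byte-identical copy
        "hflip": image[:, ::-1].copy(),     # horizontal mirror
        "rot90": np.rot90(image).copy(),    # 90-degree rotation
        "brighter": np.clip(image.astype(np.int16) + 20, 0, 255).astype(np.uint8),
    }

# Stand-in for a dermoscopic image: 64x64 RGB with random pixels.
img = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
variants = make_test_variants(img)
```

A brittle model scores well on "original" and "duplicate" but degrades on the modified variants; comparing accuracy across these keys is the susceptibility measurement.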

Results: Adding convolutions to Vision Transformers proves highly effective, achieving up to 98% accuracy. CNN architectures such as VGG-16 and DenseNet-121 reach only 50-60% accuracy, performing best with features other than dermoscopic structures. Vision Transformers without convolutions show reduced accuracy on the diverse test sets, revealing their brittleness. OpenAI CLIP, a pre-trained model, performs consistently well across the test sets. To address brittleness, a mitigation method combining extensive data augmentation during training with 23 transformed duplicates at test time sustains accuracy.
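The mitigation step can be read as test-time augmentation: average the classifier's output over the original image plus 23 transformed duplicates, so no single benign variation dominates the decision. The transform choices and the toy model below are assumptions for illustration, not the paper's exact pipeline:

```python
import numpy as np

def random_transform(img: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """One label-preserving perturbation: optional flip, random rotation,
    and a small brightness jitter."""
    if rng.random() < 0.5:
        img = img[:, ::-1]
    img = np.rot90(img, k=int(rng.integers(0, 4)))
    shift = int(rng.integers(-15, 16))
    return np.clip(img.astype(np.int16) + shift, 0, 255).astype(np.uint8)

def tta_predict(model, img: np.ndarray, n_copies: int = 23, seed: int = 0) -> float:
    """Average the melanoma probability over the original image plus
    n_copies transformed duplicates (test-time augmentation)."""
    rng = np.random.default_rng(seed)
    batch = [img] + [random_transform(img, rng) for _ in range(n_copies)]
    return float(np.mean([model(x) for x in batch]))

def toy_model(img: np.ndarray) -> float:
    """Hypothetical stand-in classifier: a 'probability' from mean brightness."""
    return float(img.mean() / 255.0)

img = np.random.default_rng(1).integers(0, 256, (64, 64, 3), dtype=np.uint8)
score = tta_predict(toy_model, img)
```

Averaging over 24 predictions (1 original + 23 duplicates) smooths out the sensitivity to any one transformation, which is why the scheme sustains accuracy on modified test images.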

Conclusions: This paper proposes a melanoma classification scheme using three dermoscopic structures on the PH2 and Derm7pt datasets, and addresses AI susceptibility to image variations. Although the available dataset is small, future work includes collecting more annotated datasets and automatically computing dermoscopic structural features.
