{"title":"CRAT: Advanced transformer-based deep learning algorithms in OCT image classification","authors":"Mingming Yang , Junhui Du , Ruichan Lv","doi":"10.1016/j.bspc.2025.107544","DOIUrl":null,"url":null,"abstract":"<div><h3>Objectives</h3><div>The primary retinal optical coherence tomography (OCT) images usually have speckle noise, which may lower the diagnostic accuracy. In this research, we developed a transformer-based deep learning algorithm named Class-Re-Attention Transformers (CRAT), which presented advanced performance to quickly and accurately predict possible retinal diseases and further pathological changes from easily accessible OCT images.</div></div><div><h3>Materials and methods</h3><div>In this context, a comprehensive collection of 109,371 retinal OCT images was curated. This collection encompasses 24,562 images indicative of AMD, 37,494 images representative of CNV, 11,598 images associated with DME, 8,896 images depicting drusen, and 26,821 images classified as normal. Among them, 190 images are used as the external test set, and they are from Xi ’an Ninth Hospital. CRA can enhance the learning of deep features and the integration of classification information through the synergy of Re-attention mechanism and attention-like layer. The Re-attention block helps mitigate the risk of Attention collapse, while the class-attention Layer enhances the classification performance by specifically handling the relationship between Class labels and features. This enhancement facilitates efficient diagnosis, leveraging the extracted features.</div></div><div><h3>Result</h3><div>In order to assess the performance of CRAT, the accuracy, precision and recall rate, specificity, and F1 score were used as the main index, which provide a comprehensive performance evaluation of the proposed algorithm. The results demonstrated that the average accuracy, average precision, average recall, average specificity and average F1 score of the five eye categories (AMD, CNV, DME, Drusen and Normal) perform well on the internal test dataset, which reached 94.40%, 94.42%, 94.39%, 98.60%, and 97.76%, respectively. And the results on the external test dataset are 97.33%, 96.33%, 97.08%, 99.17%, and 98.74%, respectively.</div></div><div><h3>Conclusion</h3><div>CRA block can reduce the influence of image noise on diagnostic results. The proposed method can help ophthalmologists to quickly and accurately predict the likely occurrence of retinal diseases.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":"104 ","pages":"Article 107544"},"PeriodicalIF":4.9000,"publicationDate":"2025-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Biomedical Signal Processing and Control","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1746809425000552","RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, BIOMEDICAL","Score":null,"Total":0}
Abstract
Objectives
Raw retinal optical coherence tomography (OCT) images usually contain speckle noise, which may lower diagnostic accuracy. In this research, we developed a transformer-based deep learning algorithm named Class-Re-Attention Transformers (CRAT), which delivers advanced performance in quickly and accurately predicting possible retinal diseases and associated pathological changes from easily accessible OCT images.
Materials and methods
In this context, a comprehensive collection of 109,371 retinal OCT images was curated. This collection encompasses 24,562 images indicative of AMD, 37,494 images representative of CNV, 11,598 images associated with DME, 8,896 images depicting drusen, and 26,821 images classified as normal. Of these, 190 images from Xi'an Ninth Hospital are used as the external test set. The CRA block enhances the learning of deep features and the integration of classification information through the synergy of the Re-attention mechanism and the class-attention layer: the Re-attention block helps mitigate the risk of attention collapse, while the class-attention layer improves classification performance by explicitly modeling the relationship between class labels and features. This design enables efficient diagnosis based on the extracted features.
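For illustration, the following is a minimal PyTorch sketch of the two attention variants named above, following the published Re-attention (DeepViT) and class-attention (CaiT) formulations that these terms commonly denote. It is a sketch under those assumptions, not the authors' CRAT implementation, and all module and parameter names here are our own.

```python
# Minimal sketch of Re-attention and class-attention in a ViT-style token
# layout. Illustrative only; not the authors' exact CRAT architecture.
import torch
import torch.nn as nn

class ReAttention(nn.Module):
    """Self-attention whose per-head attention maps are re-mixed across heads
    via a learnable H x H transform, to counteract attention collapse."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        # Learnable cross-head mixing of attention maps (theta in DeepViT).
        self.mix = nn.Conv2d(heads, heads, kernel_size=1)
        self.norm = nn.BatchNorm2d(heads)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (B, N, dim)
        B, N, _ = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.heads, -1).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]        # each: (B, H, N, d)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)
        attn = self.norm(self.mix(attn))        # re-mix attention maps across heads
        out = (attn @ v).transpose(1, 2).reshape(B, N, -1)
        return self.proj(out)

class ClassAttention(nn.Module):
    """Only the class token queries the sequence, so classification
    information is aggregated without updating the patch tokens."""
    def __init__(self, dim, heads=8):
        super().__init__()
        self.heads = heads
        self.scale = (dim // heads) ** -0.5
        self.q = nn.Linear(dim, dim, bias=False)    # query from the class token only
        self.kv = nn.Linear(dim, dim * 2, bias=False)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (B, 1 + N_patches, dim)
        B, N, _ = x.shape
        q = self.q(x[:, :1]).reshape(B, 1, self.heads, -1).transpose(1, 2)
        kv = self.kv(x).reshape(B, N, 2, self.heads, -1).permute(2, 0, 3, 1, 4)
        k, v = kv[0], kv[1]
        attn = ((q @ k.transpose(-2, -1)) * self.scale).softmax(dim=-1)
        cls = (attn @ v).transpose(1, 2).reshape(B, 1, -1)
        return self.proj(cls)                   # updated class token: (B, 1, dim)
```

In this reading, Re-attention blocks refine patch features while staying robust to redundant heads, and a class-attention stage then lets the class token pool those features for the final diagnosis head.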
Result
To assess the performance of CRAT, accuracy, precision, recall, specificity, and F1 score were used as the main indices, providing a comprehensive evaluation of the proposed algorithm. The results demonstrated that the average accuracy, precision, recall, specificity, and F1 score over the five categories (AMD, CNV, DME, drusen, and normal) were strong on the internal test dataset, reaching 94.40%, 94.42%, 94.39%, 98.60%, and 97.76%, respectively. The corresponding results on the external test dataset were 97.33%, 96.33%, 97.08%, 99.17%, and 98.74%, respectively.
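For reference, the sketch below shows one common way to compute such metrics as one-vs-rest, macro-averaged quantities from a five-class confusion matrix. The averaging protocol and the dummy labels are our assumptions; the abstract does not specify how the reported averages were formed.

```python
# Minimal sketch: macro-averaged one-vs-rest metrics for a 5-class problem.
import numpy as np
from sklearn.metrics import confusion_matrix

def macro_metrics(y_true, y_pred, n_classes=5):
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    tp = np.diag(cm).astype(float)
    fp = cm.sum(axis=0) - tp                 # predicted as class c, but wrong
    fn = cm.sum(axis=1) - tp                 # true class c, missed
    tn = cm.sum() - tp - fp - fn
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                  # sensitivity
    specificity = tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / cm.sum()          # per-class one-vs-rest accuracy
    per_class = dict(accuracy=accuracy, precision=precision, recall=recall,
                     specificity=specificity, f1=f1)
    return {name: vals.mean() for name, vals in per_class.items()}

# Illustrative usage with dummy labels (0=AMD, 1=CNV, 2=DME, 3=drusen, 4=normal):
y_true = np.array([0, 1, 2, 3, 4, 0, 1])
y_pred = np.array([0, 1, 2, 3, 4, 1, 1])
print(macro_metrics(y_true, y_pred))
```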
Conclusion
The CRA block can reduce the influence of image noise on diagnostic results. The proposed method can help ophthalmologists quickly and accurately predict the likely occurrence of retinal diseases.
About the journal
Biomedical Signal Processing and Control aims to provide a cross-disciplinary international forum for the interchange of information on research in the measurement and analysis of signals and images in clinical medicine and the biological sciences. Emphasis is placed on contributions dealing with practical, applications-led research on the use of methods and devices in clinical diagnosis, patient monitoring and management.
Biomedical Signal Processing and Control reflects the main areas in which these methods are being used and developed at the interface of both engineering and clinical science. The scope of the journal is defined to include relevant review papers, technical notes, short communications and letters. Tutorial papers and special issues will also be published.