A Novel Transformer Method Pretrained with Masked Autoencoders and Fractal Dimension for Diabetic Retinopathy Classification

Yaoming Yang, Zhao Zha, Chennan Zhou, Lida Zhang, Shuxia Qiu, Peng Xu

Fractals, 46(1), published 2024-03-27
DOI: 10.1142/s0218348x24500609
Abstract
Diabetic retinopathy (DR) is one of the leading causes of blindness among the working population, and its damage to vision is irreversible. Rapid diagnosis of DR is therefore crucial for preserving a patient's eyesight. Since Transformers show superior performance in computer vision compared with Convolutional Neural Networks (CNNs), they have been proposed and applied in the computer-aided diagnosis of DR. However, because Transformers lack the inductive bias of CNNs, a large number of images are required for training. It has been demonstrated that retinal vessels follow a self-similar fractal scaling law, and that the fractal dimension of DR patients differs evidently from that of healthy individuals. Based on this, the fractal dimension is introduced as a prior into Transformers to mitigate the adverse influence of the lack of inductive bias on model performance. A new Transformer method pretrained with Masked Autoencoders and fractal dimension (MAEFD) is developed and proposed in this paper. Experiments on the APTOS dataset show that the DR classification performance of the proposed MAEFD is substantially improved. Additionally, the present model pretrained with 100,000 retinal images outperforms one pretrained with 1 million natural images in terms of DR classification performance.
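The abstract does not specify how the fractal dimension of the retinal vasculature is estimated; the standard approach for binary vessel masks is box counting, in which the number of occupied boxes N(s) at box size s is fitted against log(1/s). The sketch below is a minimal, hypothetical illustration of that technique and is not the authors' implementation.

```python
import numpy as np

def box_counting_dimension(mask, min_box=2):
    """Estimate the fractal dimension of a binary mask via box counting.

    For each box size s, count the number N(s) of s-by-s boxes containing
    at least one foreground pixel; the fractal dimension is the slope of
    log N(s) versus log(1/s).
    """
    mask = np.asarray(mask, dtype=bool)
    size = min(mask.shape)
    # Box sizes: powers of two from min_box up to half the image side.
    sizes = [2 ** k for k in range(int(np.log2(min_box)), int(np.log2(size)))]
    counts = []
    for s in sizes:
        # Trim so the image tiles evenly, then count non-empty s-by-s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        boxes = mask[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(boxes.any(axis=(1, 3)).sum())
    # Linear fit of log N(s) against log(1/s); the slope is the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

As a sanity check, a fully filled image yields a dimension near 2 and a single-pixel-wide line yields a dimension near 1; a healthy retinal vessel tree typically falls between these extremes.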