{"title":"Reference aware attention based medical image diagnosis","authors":"Qidan Dai, Wenhui Shen, Pike Xu, Heng Xiao, Xiao Qin","doi":"10.1117/12.2667605","DOIUrl":null,"url":null,"abstract":"Given the excellent globality and parallelism, Transformer has been widely applied to image tasks. Visual Transformers demand modeling the spatial correlations among visual tokens. However, those existing methods either only emphasize the relative position between two tokens, or only concern on their contexts. Intuitively, a rational attention distribution should hinge on both. To this end, this paper proposes Reference Aware Attention (RAA). RAA decomposes inner-tokens dependency into three intuitive factors, in which reference bias is introduced to model how a reference token attends to a region. Experimental results suggest that RAA can effectively promote the performances of visual Transformers on various medical image diagnosis tasks.","PeriodicalId":128051,"journal":{"name":"Third International Seminar on Artificial Intelligence, Networking, and Information Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Third International Seminar on Artificial Intelligence, Networking, and Information Technology","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2667605","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Owing to their excellent global receptive field and parallelism, Transformers have been widely applied to image tasks. Vision Transformers require modeling the spatial correlations among visual tokens. However, existing methods either emphasize only the relative position between two tokens or consider only their contexts. Intuitively, a sound attention distribution should hinge on both. To this end, this paper proposes Reference Aware Attention (RAA). RAA decomposes inter-token dependency into three intuitive factors, among which a reference bias is introduced to model how a reference token attends to a region. Experimental results suggest that RAA effectively improves the performance of vision Transformers on various medical image diagnosis tasks.
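The abstract names the mechanism but gives no formulation. For intuition only, below is a minimal PyTorch sketch of one plausible reading: the three factors are assumed to be a content-to-content term, a relative-position term, and the reference bias (a per-query-token bias broadcast over a key region). All names here (ReferenceAwareAttention, rel_pos, ref_bias, seq_len) are illustrative assumptions, not the paper's published method.

```python
# Hedged sketch of a Reference Aware Attention (RAA) layer. The abstract only
# states that inter-token dependency decomposes into three factors, with a
# "reference bias" modeling how a reference (query) token attends to a region.
# The exact factors used below are assumptions made for illustration.
import torch
import torch.nn as nn


class ReferenceAwareAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8, seq_len: int = 196):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # Assumed factor 2: a learned relative-position table per head,
        # one entry per (query, key) pair.
        self.rel_pos = nn.Parameter(torch.zeros(num_heads, seq_len, seq_len))
        # Assumed factor 3: reference bias -- how strongly each reference
        # (query) token attends to a region, independent of key content,
        # broadcast over all keys.
        self.ref_bias = nn.Parameter(torch.zeros(num_heads, seq_len, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)  # each: (B, heads, N, head_dim)

        # Factor 1: content-to-content similarity.
        logits = (q @ k.transpose(-2, -1)) * self.scale
        # Factor 2: relative-position term.
        logits = logits + self.rel_pos[:, :N, :N]
        # Factor 3: reference bias, broadcast across the key dimension.
        logits = logits + self.ref_bias[:, :N, :]

        attn = logits.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)


if __name__ == "__main__":
    x = torch.randn(2, 196, 256)      # e.g. 14x14 patch tokens, dim 256
    layer = ReferenceAwareAttention(256)
    print(layer(x).shape)             # torch.Size([2, 196, 256])
```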