GSAP: A Hybrid GRU and Self-Attention Based Model for Dual Medical NLP Tasks
Huey-Ing Liu, Meng-Wei Chen, Wei-Chun Kao, Yao-Wen Yeh, Cheng Yang
2022 14th International Conference on Knowledge and Smart Technology (KST), 26 January 2022. DOI: 10.1109/KST53302.2022.9727234
Abstract
This paper proposes a hybrid Gated Recurrent Unit (GRU) and Self-Attention based model, named GSAP, for dual medical-related NLP tasks. GSAP stacks three well-known neural network components: a GRU, a self-attention layer, and a pooling layer borrowed from Convolutional Neural Networks (CNNs), to improve accuracy. In GSAP, the GRU is first used to comprehend the input sentences. Second, the self-attention layer helps the model focus on the key parts of the input. Finally, the pooling layer eases overfitting and improves system accuracy. GSAP is applied to two different medical NLP tasks, medical QA matching and smoking status classification, and demonstrates strong results: on smoking status prediction it reaches an accuracy of around 80%, and on medical QA matching it raises accuracy to around 90%.
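The abstract describes GSAP as a stack of a GRU, a self-attention layer, and a pooling layer feeding a classifier. The sketch below illustrates that kind of stack in PyTorch; the layer sizes, the bidirectional GRU, the single-head attention, and the max-pooling choice are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of a GSAP-style stack (GRU -> self-attention -> pooling -> classifier).
# Hyperparameters and layer choices are assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class GSAPSketch(nn.Module):
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # GRU reads the sentence and produces contextual token representations.
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        # Self-attention lets the model weight the key tokens of the input.
        self.attn = nn.MultiheadAttention(2 * hidden_dim, num_heads=1, batch_first=True)
        # Pooling (borrowed from CNNs) compresses the sequence and curbs overfitting.
        self.pool = nn.AdaptiveMaxPool1d(1)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, token_ids):                      # token_ids: (batch, seq_len)
        x = self.embed(token_ids)                      # (batch, seq_len, embed_dim)
        x, _ = self.gru(x)                             # (batch, seq_len, 2*hidden_dim)
        x, _ = self.attn(x, x, x)                      # self-attention over GRU outputs
        x = self.pool(x.transpose(1, 2)).squeeze(-1)   # (batch, 2*hidden_dim)
        return self.fc(x)                              # class logits

# Example: a binary classification task (e.g. smoking status) over a toy batch of token ids.
model = GSAPSketch(vocab_size=10000, num_classes=2)
logits = model(torch.randint(0, 10000, (4, 32)))
print(logits.shape)  # torch.Size([4, 2])
```

For the QA matching task, the same stack could be applied to a concatenated question-answer pair with a binary "match / no match" head; how the paper actually pairs questions and answers is not specified in the abstract.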