{"title":"使用细心的字符和本地化的上下文表示来识别个人药物摄入的推文","authors":"Jarashanth Selvarajah, Ruwan Dharshana Nawarathna","doi":"10.3897/jucs.84130","DOIUrl":null,"url":null,"abstract":"Individuals with health anomalies often share their experiences on social media sites, such as Twitter, which yields an abundance of data on a global scale. Nowadays, social media data constitutes a leading source to build drug monitoring and surveillance systems. However, a proper assessment of such data requires discarding mentions which do not express drug-related personal health experiences. We automate this process by introducing a novel deep learning model. The model includes character-level and word-level embeddings, embedding-level attention, convolu- tional neural networks (CNN), bidirectional gated recurrent units (BiGRU), and context-aware attentions. An embedding for a word is produced by integrating both word-level and character-level embeddings using an embedding-level attention mechanism, which selects the salient features from both embeddings without expanding dimensionality. The resultant embedding is further analyzed by three CNN layers independently, where each extracts unique n-grams. BiGRUs followed by attention layers further process the outputs from each CNN layer. Besides, the resultant embedding is also encoded by a BiGRU with attention. Our model is developed to cope with the intricate attributes inherent to tweets such as vernacular texts, descriptive medical phrases, frequently misspelt words, abbreviations, short messages, and others. All these four outputs are summed and sent to a softmax classifier. We built a dataset by incorporating tweets from two benchmark datasets designed for the same objective to evaluate the performance. Our model performs substantially better than existing models, including several customized Bidirectional Encoder Representations from Transformers (BERT) models with an F1-score of 0.772.","PeriodicalId":14652,"journal":{"name":"J. Univers. 
Comput. Sci.","volume":"19 1","pages":"1312-1329"},"PeriodicalIF":0.0000,"publicationDate":"2022-12-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Identifying Tweets with Personal Medication Intake Mentions using Attentive Character and Localized Context Representations\",\"authors\":\"Jarashanth Selvarajah, Ruwan Dharshana Nawarathna\",\"doi\":\"10.3897/jucs.84130\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Individuals with health anomalies often share their experiences on social media sites, such as Twitter, which yields an abundance of data on a global scale. Nowadays, social media data constitutes a leading source to build drug monitoring and surveillance systems. However, a proper assessment of such data requires discarding mentions which do not express drug-related personal health experiences. We automate this process by introducing a novel deep learning model. The model includes character-level and word-level embeddings, embedding-level attention, convolu- tional neural networks (CNN), bidirectional gated recurrent units (BiGRU), and context-aware attentions. An embedding for a word is produced by integrating both word-level and character-level embeddings using an embedding-level attention mechanism, which selects the salient features from both embeddings without expanding dimensionality. The resultant embedding is further analyzed by three CNN layers independently, where each extracts unique n-grams. BiGRUs followed by attention layers further process the outputs from each CNN layer. Besides, the resultant embedding is also encoded by a BiGRU with attention. Our model is developed to cope with the intricate attributes inherent to tweets such as vernacular texts, descriptive medical phrases, frequently misspelt words, abbreviations, short messages, and others. All these four outputs are summed and sent to a softmax classifier. 
We built a dataset by incorporating tweets from two benchmark datasets designed for the same objective to evaluate the performance. Our model performs substantially better than existing models, including several customized Bidirectional Encoder Representations from Transformers (BERT) models with an F1-score of 0.772.\",\"PeriodicalId\":14652,\"journal\":{\"name\":\"J. Univers. Comput. Sci.\",\"volume\":\"19 1\",\"pages\":\"1312-1329\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2022-12-28\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"J. Univers. Comput. Sci.\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3897/jucs.84130\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"J. Univers. Comput. Sci.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3897/jucs.84130","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Identifying Tweets with Personal Medication Intake Mentions using Attentive Character and Localized Context Representations
Individuals with health anomalies often share their experiences on social media sites such as Twitter, which yields an abundance of data on a global scale. Social media data has therefore become a leading source for building drug monitoring and surveillance systems. However, a proper assessment of such data requires discarding mentions that do not express drug-related personal health experiences. We automate this process by introducing a novel deep learning model that combines character-level and word-level embeddings, embedding-level attention, convolutional neural networks (CNNs), bidirectional gated recurrent units (BiGRUs), and context-aware attention. An embedding for each word is produced by integrating its word-level and character-level embeddings through an embedding-level attention mechanism, which selects the salient features of both embeddings without expanding dimensionality. The resulting embedding is then analyzed independently by three CNN layers, each extracting distinct n-grams, and the output of each CNN layer is further processed by a BiGRU followed by an attention layer. In addition, the resulting embedding is encoded directly by a BiGRU with attention. These four outputs are summed and passed to a softmax classifier. The model is designed to cope with the intricate attributes inherent to tweets, such as vernacular language, descriptive medical phrases, frequently misspelt words, abbreviations, and short messages. To evaluate performance, we built a dataset by combining tweets from two benchmark datasets designed for the same objective. Our model performs substantially better than existing models, including several customized Bidirectional Encoder Representations from Transformers (BERT) models, achieving an F1-score of 0.772.
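The architecture described above can be sketched roughly as follows. This is a minimal, hypothetical PyTorch reconstruction for illustration only, not the authors' implementation: all dimensions, kernel sizes, and the per-word character representation (reduced here to a simple embedding lookup) are assumptions, and the gating form of the embedding-level attention is one plausible reading of the abstract.

```python
import torch
import torch.nn as nn

class MedIntakeClassifier(nn.Module):
    """Sketch of the four-branch model: attentive char/word embedding fusion,
    three CNN->BiGRU->attention branches, one direct BiGRU->attention branch."""

    def __init__(self, vocab_size=10000, char_vocab=100,
                 emb_dim=128, hidden=64, n_classes=3):  # hypothetical sizes
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, emb_dim)
        # Per-word character composition is simplified to a lookup here.
        self.char_emb = nn.Embedding(char_vocab, emb_dim)
        # Embedding-level attention: a per-dimension gate blends the two
        # embeddings without expanding dimensionality.
        self.attn_gate = nn.Linear(2 * emb_dim, emb_dim)
        # Three CNN layers with different kernel sizes extract distinct n-grams.
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, hidden, k, padding=k // 2) for k in (2, 3, 4)])
        self.grus = nn.ModuleList(
            [nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
             for _ in range(3)])
        # Fourth branch: BiGRU applied directly to the fused embedding.
        self.gru_direct = nn.GRU(emb_dim, hidden, batch_first=True,
                                 bidirectional=True)
        self.attn = nn.ModuleList([nn.Linear(2 * hidden, 1) for _ in range(4)])
        self.out = nn.Linear(2 * hidden, n_classes)  # softmax via cross-entropy

    def attend(self, h, attn_layer):
        # Attention pooling: weighted sum of BiGRU states over time steps.
        w = torch.softmax(attn_layer(h), dim=1)
        return (w * h).sum(dim=1)

    def forward(self, word_ids, char_ids):
        w, c = self.word_emb(word_ids), self.char_emb(char_ids)
        g = torch.sigmoid(self.attn_gate(torch.cat([w, c], dim=-1)))
        e = g * w + (1 - g) * c  # fused embedding, same size as each input
        branches = []
        for conv, gru, att in zip(self.convs, self.grus, self.attn[:3]):
            f = torch.relu(conv(e.transpose(1, 2))).transpose(1, 2)
            h, _ = gru(f)
            branches.append(self.attend(h, att))
        h, _ = self.gru_direct(e)
        branches.append(self.attend(h, self.attn[3]))
        return self.out(sum(branches))  # sum the four outputs, then classify

model = MedIntakeClassifier()
word_ids = torch.randint(0, 10000, (2, 12))   # batch of 2 tweets, 12 tokens
char_ids = torch.randint(0, 100, (2, 12))
logits = model(word_ids, char_ids)            # shape: (2, n_classes)
```

The number of classes is left as a parameter since the abstract does not state it; the classifier returns logits, with the softmax applied by the loss function in the usual PyTorch idiom.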