A. Oltramari, Jonathan M Francis, Filip Ilievski, Kaixin Ma, Roshanak Mirzaee
{"title":"常识性问答的可推广神经符号系统","authors":"A. Oltramari, Jonathan M Francis, Filip Ilievski, Kaixin Ma, Roshanak Mirzaee","doi":"10.3233/FAIA210360","DOIUrl":null,"url":null,"abstract":"This chapter illustrates how suitable neuro-symbolic models for language understanding can enable domain generalizability and robustness in downstream tasks. Different methods for integrating neural language models and knowledge graphs are discussed. The situations in which this combination is most appropriate are characterized, including quantitative evaluation and qualitative error analysis on a variety of commonsense question answering benchmark datasets.","PeriodicalId":250200,"journal":{"name":"Neuro-Symbolic Artificial Intelligence","volume":"29 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Generalizable Neuro-symbolic Systems for Commonsense Question Answering\",\"authors\":\"A. Oltramari, Jonathan M Francis, Filip Ilievski, Kaixin Ma, Roshanak Mirzaee\",\"doi\":\"10.3233/FAIA210360\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This chapter illustrates how suitable neuro-symbolic models for language understanding can enable domain generalizability and robustness in downstream tasks. Different methods for integrating neural language models and knowledge graphs are discussed. The situations in which this combination is most appropriate are characterized, including quantitative evaluation and qualitative error analysis on a variety of commonsense question answering benchmark datasets.\",\"PeriodicalId\":250200,\"journal\":{\"name\":\"Neuro-Symbolic Artificial Intelligence\",\"volume\":\"29 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2021-12-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Neuro-Symbolic Artificial Intelligence\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.3233/FAIA210360\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Neuro-Symbolic Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.3233/FAIA210360","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Generalizable Neuro-symbolic Systems for Commonsense Question Answering
This chapter illustrates how neuro-symbolic models suited to language understanding can enable domain generalizability and robustness in downstream tasks. Different methods for integrating neural language models and knowledge graphs are discussed. The situations in which this combination is most appropriate are characterized through quantitative evaluation and qualitative error analysis on a variety of commonsense question answering benchmark datasets.
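To make the general pattern concrete, the sketch below illustrates one common way of combining a neural language model with a knowledge graph for commonsense question answering: retrieve relevant triples, verbalize them into a textual context, and let the language model score each answer choice against the knowledge-augmented prompt. This is a minimal illustration, not the chapter's specific method; the GPT-2 model, the toy triple store, the verbalization templates, and the lexical-overlap retrieval are all assumptions introduced here for demonstration.

```python
"""
Minimal sketch of knowledge-graph-augmented answer scoring for commonsense QA.
Assumptions (not from the chapter): GPT-2 as the language model, a toy in-memory
triple store standing in for a knowledge graph such as ConceptNet, and naive
lexical-overlap retrieval. Requires `torch` and `transformers`.
"""
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

# Toy knowledge graph: (head, relation, tail) triples in ConceptNet style.
KG = [
    ("fridge", "UsedFor", "keeping food cold"),
    ("fridge", "AtLocation", "a kitchen"),
    ("oven", "UsedFor", "baking bread"),
]

# Simple templates that verbalize triples into natural-language statements.
TEMPLATES = {
    "UsedFor": "A {h} is used for {t}.",
    "AtLocation": "A {h} is typically found in {t}.",
}

def retrieve(question, kg):
    """Keep triples whose head entity is mentioned in the question (lexical overlap)."""
    return [t for t in kg if t[0] in question.lower()]

def verbalize(triples):
    """Turn retrieved triples into a short natural-language context."""
    return " ".join(TEMPLATES[r].format(h=h, t=t) for h, r, t in triples)

def log_likelihood(text):
    """Total log-likelihood of `text` under the causal LM (higher = more plausible)."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)  # out.loss is the mean NLL over predicted tokens
    return -out.loss.item() * (ids.size(1) - 1)

def answer(question, choices, kg):
    """Score each choice against the KG-augmented prompt and return the best one."""
    context = verbalize(retrieve(question, kg))
    scores = {
        c: log_likelihood(f"{context} Question: {question} Answer: {c}")
        for c in choices
    }
    return max(scores, key=scores.get)

if __name__ == "__main__":
    q = "Where would you most likely find a fridge?"
    print(answer(q, ["in a kitchen", "in a forest", "on a beach"], KG))
```

Because the retrieved context is identical across answer choices, scoring the full prompt rather than only the answer span does not change the argmax; it simply keeps the sketch short.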