{"title":"Detecting fake news using machine learning and reasoning in Description Logics","authors":"Adrian Groza","doi":"10.1109/COMPENG50184.2022.9905431","DOIUrl":null,"url":null,"abstract":"Reasoning in Description Logics (DLs) can detect inconsistencies between trusted knowledge and not trusted sources. The proposed method is exemplified on fake news for Covid19. Machine learning is used to generate DL axioms from positive and negative examples using tools such as DL-Learner. The resulted knowledge graph formalised in DL is merged with the trusted ontologies on Covid-19. Reasoning in DL is then performed with the Racer engine, which is responsible to detect inconsistencies within the ontology. When detecting inconsistencies, a \"red flag\" is raised to signal possible fake news and the corresponding counterspeech is generated.","PeriodicalId":211056,"journal":{"name":"2022 IEEE Workshop on Complexity in Engineering (COMPENG)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE Workshop on Complexity in Engineering (COMPENG)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/COMPENG50184.2022.9905431","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Reasoning in Description Logics (DLs) can detect inconsistencies between trusted knowledge and untrusted sources. The proposed method is exemplified on fake news about Covid-19. Machine learning is used to generate DL axioms from positive and negative examples, using tools such as DL-Learner. The resulting knowledge graph, formalised in DL, is merged with trusted ontologies on Covid-19. Reasoning in DL is then performed with the Racer engine, which is responsible for detecting inconsistencies within the ontology. When an inconsistency is detected, a "red flag" is raised to signal possible fake news, and the corresponding counterspeech is generated.
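The core of the pipeline is the consistency check over the merged ontology. A minimal sketch of that step is shown below using the owlready2 library with its bundled HermiT reasoner; note that the paper itself uses the Racer engine, and the file names and printed messages here are hypothetical placeholders, not artefacts from the paper.

```python
# Sketch: flag a news item as possible fake news when the DL axioms
# learned from it (e.g. by DL-Learner) are inconsistent with a trusted
# Covid-19 ontology. Assumes owlready2 is installed and both ontologies
# exist as OWL files at the (hypothetical) paths below.
from owlready2 import (
    get_ontology,
    sync_reasoner,
    OwlReadyInconsistentOntologyError,
)

# Trusted background knowledge and the axioms learned from the news item.
trusted = get_ontology("file://trusted_covid19.owl").load()
claims = get_ontology("file://learned_news_axioms.owl").load()

# Merge the learned axioms into the trusted knowledge base.
trusted.imported_ontologies.append(claims)

try:
    # Run the reasoner (HermiT by default) over the merged ontology.
    sync_reasoner()
    print("Consistent: the news claims do not contradict trusted knowledge.")
except OwlReadyInconsistentOntologyError:
    # An inconsistency is the "red flag" signalling possible fake news;
    # the paper then generates counterspeech at this point.
    print("RED FLAG: claims contradict the trusted Covid-19 ontology.")
```

The design choice mirrors the abstract: rather than classifying news text directly, the method reduces fake-news detection to a standard DL consistency check, so any off-the-shelf OWL reasoner can serve as the detector.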