{"title":"应用形式逻辑验证增强自然语言理解","authors":"Worawan Marurngsith, Pakorn Weawsawangwong","doi":"10.1145/3316615.3316688","DOIUrl":null,"url":null,"abstract":"Inconsistencies and ambiguities of annotation can cause vagueness in the results obtained by natural language understanding (NLU). The quality of the type systems used for annotation affects the quality of annotation. To achieve highly accepted sets of annotated documents, the Fleiss' kappa score has been widely used to observe the level of agreement from annotated results, submitted by different human annotators. The challenge is that the kappa score cannot be used to validate the type systems nor to identify any incorrect annotations. Thus, we proposed an application of formal logic for validating type systems and annotations against expert rules. Experiments have been done by using four different type systems and annotation sets created by an expert and three novices. Our proposed formal logic model was used to validate the novice type systems and annotations against the expert rules. The results show that the technique could help identifying inconsistencies between expert and novice annotations, by using a model checker. The number of detected inconsistencies impacts the level of achieved F1 score. Thus, the proposed formal logic technique could be used to guide novice annotators to develop accepted type systems. This will help to enhance the performance of the generated machine learning models used by the NLU.","PeriodicalId":268392,"journal":{"name":"Proceedings of the 2019 8th International Conference on Software and Computer Applications","volume":"19 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2019-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Applying Formal Logic Validation to Enhance Natural Language Understanding\",\"authors\":\"Worawan Marurngsith, Pakorn Weawsawangwong\",\"doi\":\"10.1145/3316615.3316688\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Inconsistencies and ambiguities of annotation can cause vagueness in the results obtained by natural language understanding (NLU). The quality of the type systems used for annotation affects the quality of annotation. To achieve highly accepted sets of annotated documents, the Fleiss' kappa score has been widely used to observe the level of agreement from annotated results, submitted by different human annotators. The challenge is that the kappa score cannot be used to validate the type systems nor to identify any incorrect annotations. Thus, we proposed an application of formal logic for validating type systems and annotations against expert rules. Experiments have been done by using four different type systems and annotation sets created by an expert and three novices. Our proposed formal logic model was used to validate the novice type systems and annotations against the expert rules. The results show that the technique could help identifying inconsistencies between expert and novice annotations, by using a model checker. The number of detected inconsistencies impacts the level of achieved F1 score. Thus, the proposed formal logic technique could be used to guide novice annotators to develop accepted type systems. 
This will help to enhance the performance of the generated machine learning models used by the NLU.\",\"PeriodicalId\":268392,\"journal\":{\"name\":\"Proceedings of the 2019 8th International Conference on Software and Computer Applications\",\"volume\":\"19 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2019-02-19\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 2019 8th International Conference on Software and Computer Applications\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/3316615.3316688\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 2019 8th International Conference on Software and Computer Applications","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3316615.3316688","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Applying Formal Logic Validation to Enhance Natural Language Understanding
Inconsistencies and ambiguities in annotation can cause vagueness in the results obtained by natural language understanding (NLU), and the quality of the type systems used for annotation affects the quality of the annotations themselves. To obtain widely accepted sets of annotated documents, the Fleiss' kappa score has been widely used to measure the level of agreement among annotation results submitted by different human annotators. The challenge is that the kappa score can neither validate the type systems nor identify incorrect annotations. We therefore propose an application of formal logic for validating type systems and annotations against expert rules. Experiments were conducted on four type systems and annotation sets created by one expert and three novices. Our formal logic model was used to validate the novice type systems and annotations against the expert rules. The results show that, with the help of a model checker, the technique can identify inconsistencies between expert and novice annotations, and that the number of detected inconsistencies affects the F1 score achieved. The proposed formal logic technique can therefore be used to guide novice annotators in developing accepted type systems, which in turn helps enhance the performance of the machine learning models generated for NLU.
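
For context on the agreement measure the abstract refers to, the following is a minimal, self-contained Python sketch of the Fleiss' kappa computation. The rating matrix, category count, and number of annotators are hypothetical and do not come from the paper; the sketch only illustrates the standard formula.

```python
# Minimal sketch of Fleiss' kappa for inter-annotator agreement.
# The rating matrix below is hypothetical: rows are annotated items,
# columns are annotation categories, and each cell counts how many of
# the n annotators assigned that category to that item.

def fleiss_kappa(ratings):
    """ratings[i][j] = number of raters who put item i into category j."""
    N = len(ratings)                 # number of items
    n = sum(ratings[0])              # raters per item (assumed constant)
    k = len(ratings[0])              # number of categories

    # Proportion of all assignments that went to each category.
    p_j = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]

    # Per-item agreement: fraction of rater pairs that agree on the item.
    P_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]

    P_bar = sum(P_i) / N             # mean observed agreement
    P_e = sum(p * p for p in p_j)    # agreement expected by chance
    return (P_bar - P_e) / (1 - P_e)

# Four items, three annotators, three categories (hypothetical data).
ratings = [
    [3, 0, 0],
    [2, 1, 0],
    [0, 3, 0],
    [1, 1, 1],
]
print(f"Fleiss' kappa = {fleiss_kappa(ratings):.3f}")
```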
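The abstract does not name the model checker or the rule language used, so the sketch below only illustrates the general idea: expert rules and a novice annotation are encoded as propositional constraints, and an off-the-shelf solver (Z3's Python bindings, used here as a stand-in for the paper's model checker) reports whether the annotation is consistent with the rules. The type names and the rules themselves are invented for illustration; the paper's validation additionally covers the type systems, not just individual annotations.

```python
# Illustrative sketch: checking a novice annotation against expert rules
# with a propositional solver (Z3 as a stand-in for the model checker;
# the type names and rules are hypothetical).
from z3 import Bools, Solver, Implies, And, Or, Not, unsat

# Propositions: "the annotated span carries type T".
person, organization, location = Bools("Person Organization Location")

# Hypothetical expert rules for the type system:
#   1. A span typed as Person must not also be typed as Organization.
#   2. Every span must carry at least one of the three types.
expert_rules = And(
    Implies(person, Not(organization)),
    Or(person, organization, location),
)

# A novice annotation of one span, encoded as a conjunction of literals.
novice_annotation = And(person, organization, Not(location))

solver = Solver()
solver.add(expert_rules, novice_annotation)

if solver.check() == unsat:
    print("Inconsistent: the annotation violates the expert rules.")
else:
    print("Consistent with the expert rules:", solver.model())
```

In this toy example the solver reports the annotation as inconsistent, since the span is typed as both Person and Organization; a count of such violations across an annotation set is the kind of signal the abstract describes for guiding novice annotators.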