Evolving long-term dependency rules in lifelong learning models
Muhammad Taimoor Khan, Sonam Yar, S. Khalid, Furqan Aziz
2016 IEEE International Conference on Knowledge Engineering and Applications (ICKEA), September 2016. DOI: 10.1109/ICKEA.2016.7802999
Topic models are widely used in text analysis to extract prominent concepts as topics from a large collection of documents about a subject domain, and they have been extended in different ways to suit various application areas. Automatic knowledge-based topic models were recently introduced to meet the processing needs of large-scale data spanning many subject domains. Such a model automatically learns rules across all domains and uses them to improve the results of the current domain by purposefully grouping words into topics that better represent the underlying concepts. Existing models learn rules by applying thresholds on evaluation criteria; however, because the process is automatic, they may also learn wrong, irrelevant, or inconsistent rules. The model proposed in this article learns rules and monitors their contribution to the quality of the results. As the model learns new rules, the existing rules undergo refinement and detachment procedures so that only reliable rules are retained. Experimental results on user reviews from Amazon.com show improved topic quality while using fewer rules, which indicates higher rule quality and helps avoid a performance bottleneck as the model accumulates experience.
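The refinement and detachment idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the rule format (a pair of words expected to share a topic), the contribution measure (per-domain change in a topic-quality score such as coherence), and the threshold values are all assumptions made here for illustration.

```python
# Minimal sketch of lifelong rule maintenance: rules are learned across domains,
# their contribution to topic quality is monitored, weak rules are refined, and
# consistently harmful rules are detached. Rule format, scoring, and thresholds
# are assumptions, not taken from the paper.

from dataclasses import dataclass, field


@dataclass
class DependencyRule:
    """A hypothetical long-term rule: two words expected to appear in the same topic."""
    word_a: str
    word_b: str
    contributions: list = field(default_factory=list)  # per-domain topic-quality deltas

    def mean_contribution(self) -> float:
        if not self.contributions:
            return 0.0
        return sum(self.contributions) / len(self.contributions)


class RuleStore:
    """Keeps learned rules, refines weak ones, and detaches unreliable ones."""

    def __init__(self, refine_threshold=0.0, detach_threshold=-0.05, min_observations=3):
        self.rules: list[DependencyRule] = []
        self.refine_threshold = refine_threshold    # below this, a rule's evidence is trimmed
        self.detach_threshold = detach_threshold    # below this, the rule is dropped entirely
        self.min_observations = min_observations    # evidence needed before judging a rule

    def add_rule(self, word_a: str, word_b: str) -> None:
        self.rules.append(DependencyRule(word_a, word_b))

    def record_contribution(self, rule: DependencyRule, quality_delta: float) -> None:
        """quality_delta: change in a topic-quality score (e.g. coherence) when the rule is applied."""
        rule.contributions.append(quality_delta)

    def maintain(self) -> None:
        """Refinement/detachment pass run after processing a new domain."""
        retained = []
        for rule in self.rules:
            if len(rule.contributions) < self.min_observations:
                retained.append(rule)               # not enough evidence yet; keep for now
                continue
            score = rule.mean_contribution()
            if score < self.detach_threshold:
                continue                            # detach: rule consistently hurts topic quality
            if score < self.refine_threshold:
                # refine: discard stale evidence and re-judge the rule on recent domains only
                rule.contributions = rule.contributions[-self.min_observations:]
            retained.append(rule)
        self.rules = retained
```

A short usage example under the same assumptions:

```python
store = RuleStore()
store.add_rule("battery", "life")
for delta in (0.04, -0.01, 0.03):          # hypothetical coherence changes across three domains
    store.record_contribution(store.rules[0], delta)
store.maintain()                           # average contribution is positive, so the rule is retained
print(len(store.rules))                    # 1
```

Using the average observed contribution as a reliability proxy mirrors the abstract's point that rules are kept only while they demonstrably improve topic quality, which is also why the retained rule set stays small.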