Authors: Xinzhi Wang, Mengyue Li, Hang Yu, Chenyang Wang, Vijayan Sugumaran, Hui Zhang
Journal: Cognitive Computation (Q2, Computer Science, Artificial Intelligence; Impact Factor 4.3)
DOI: 10.1007/s12559-024-10348-3
Publication Date: 2024-09-03 (Journal Article)
Enhancing Pre-trained Deep Learning Model with Self-Adaptive Reflection
In text mining, prevalent deep learning models focus primarily on mapping input features to predicted outputs and lack a self-dialectical thinking process. Inspired by self-reflective mechanisms in human cognition, we hypothesize that existing models can emulate human decision-making processes and automatically rectify erroneous predictions. The Self-adaptive Reflection Enhanced pre-trained deep learning Model (S-REM) is introduced to validate this hypothesis and to determine which types of knowledge warrant reproduction. Building on a pre-trained model, S-REM uses the local explanation for the pseudo-label and the global explanation for all labels as explanation knowledge, and integrates keyword knowledge from a TF-IDF model to form the reflection knowledge. Guided by the key explanation features, the pre-trained model reflects on its initial decision through two reflection methods and optimizes the predictions of deep learning models. Experiments with local and global reflection variants of S-REM were conducted on two text mining tasks across four datasets, three public and one private. The results demonstrate the efficacy of our method in improving the accuracy of state-of-the-art deep learning models. Furthermore, the method can serve as a foundational step toward developing explainable models through integration with various deep learning models.
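The abstract does not include an implementation, but the TF-IDF keyword-knowledge component it mentions can be sketched. The following is a minimal, self-contained illustration of extracting per-document keyword knowledge with TF-IDF; the function name, smoothing choice, and `top_k` parameter are our own assumptions, not the authors' code:

```python
import math
from collections import Counter

def tfidf_keywords(docs, top_k=3):
    """Return the top_k highest TF-IDF scoring tokens per document.

    A simple sketch of 'keyword knowledge' extraction: tokens that are
    frequent in a document but rare across the corpus score highest.
    """
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)

    # Document frequency: how many documents contain each token.
    df = Counter()
    for tokens in tokenized:
        df.update(set(tokens))

    keywords = []
    for tokens in tokenized:
        tf = Counter(tokens)
        # Smoothed IDF (as in common TF-IDF variants) avoids zeroing
        # out tokens that appear in every document.
        scores = {
            tok: (tf[tok] / len(tokens))
            * (math.log((1 + n_docs) / (1 + df[tok])) + 1)
            for tok in tf
        }
        keywords.append(sorted(scores, key=scores.get, reverse=True)[:top_k])
    return keywords
```

In S-REM these keywords would be combined with the local and global explanation features to form the reflection knowledge; how that combination is weighted is specific to the paper and not reproduced here.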
Journal Introduction:
Cognitive Computation is an international, peer-reviewed, interdisciplinary journal that publishes cutting-edge articles describing original basic and applied work involving biologically-inspired computational accounts of all aspects of natural and artificial cognitive systems. It provides a new platform for the dissemination of research, current practices and future trends in the emerging discipline of cognitive computation that bridges the gap between life sciences, social sciences, engineering, physical and mathematical sciences, and humanities.