Title: An explainable artificial intelligence – human collaborative model for investigating patent novelty
Authors: Hyejin Jang, Byungun Yoon
Journal: Engineering Applications of Artificial Intelligence, Volume 154, Article 110984 (JCR Q1, Automation & Control Systems; Impact Factor 7.5)
Publication date: 2025-05-01 (Journal Article)
DOI: 10.1016/j.engappai.2025.110984
URL: https://www.sciencedirect.com/science/article/pii/S0952197625009844
An explainable artificial intelligence – human collaborative model for investigating patent novelty
With the accumulation of technology-related big data, including patent databases, prior studies have proposed frameworks for patent analysis based on natural language processing models. Artificial intelligence (AI) applications require not only strong model predictive performance but also human experience and insight grounded in an understanding of complex environments and uncertainty. However, existing research has focused on applying big data and developing automated processes; actual user understanding and model usability have received insufficient attention. AI models should therefore be developed with a human–machine cooperation approach in mind. This study proposes a collaborative approach through which an explainable AI (XAI) model, a self-explaining deep neural network for text classification, communicates with users. The proposed XAI model presents users with both its prediction results for patent evaluation and an explanation of each prediction. Users provide feedback based on the predictions and their explanations, and the source XAI model is then refined by relearning that reflects this feedback. Experiments assess the resulting model improvement under two human collaboration methods: human intervention performed independently of the XAI model's results, and human participation guided by the explanations the XAI model presents. The experimental results verified the XAI model's performance, with the highest accuracy (0.890) and F1 score (0.916), indicating that the model can be applied efficiently to patent evaluation. The XAI–human collaboration model presented in this study can also be extended to technology intelligence tasks.
However, the collaborative approach in this study places complete trust in the advice of technical experts; subsequent collaborative XAI models could therefore be improved through bidirectional communication with human experts in a complementary relationship.
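The predict, explain, feedback, and relearn cycle summarized in the abstract can be sketched in a few lines. The following is a minimal illustrative toy, not the authors' implementation: a bag-of-words perceptron stands in for the self-explaining deep neural network, its token weights double as the per-token "explanation," and all class and method names are hypothetical.

```python
# Illustrative sketch of an XAI-human collaboration loop:
# the model predicts, exposes an explanation, and is refined
# ("relearned") from expert feedback. Toy stand-in for the paper's
# self-explaining deep neural network.
from collections import defaultdict

class SelfExplainingClassifier:
    def __init__(self):
        # token -> weight; positive weights push toward "novel" (label 1)
        self.weights = defaultdict(float)

    def predict(self, text):
        tokens = text.lower().split()
        score = sum(self.weights[t] for t in tokens)
        label = 1 if score > 0 else 0  # 1 = novel, 0 = not novel
        # Explanation: each token's contribution to the decision.
        explanation = {t: self.weights[t] for t in tokens if self.weights[t] != 0}
        return label, explanation

    def relearn(self, text, human_label):
        # Reflect human feedback via a perceptron-style weight update
        # whenever the prediction disagrees with the expert.
        pred, _ = self.predict(text)
        if pred != human_label:
            delta = 1.0 if human_label == 1 else -1.0
            for t in text.lower().split():
                self.weights[t] += delta

clf = SelfExplainingClassifier()
# A hypothetical expert corrects two model predictions.
clf.relearn("novel sensor fusion method", 1)
clf.relearn("routine housing bracket design", 0)
label, explanation = clf.predict("novel fusion bracket")
```

In this sketch the explanation is faithful by construction (it is exactly the terms of the decision score), which is the property that lets the expert judge *why* the model called a patent novel before deciding whether to trust or correct it.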
Journal overview:
Artificial Intelligence (AI) is pivotal in driving the fourth industrial revolution, witnessing remarkable advancements across various machine learning methodologies. AI techniques have become indispensable tools for practicing engineers, enabling them to tackle previously insurmountable challenges. Engineering Applications of Artificial Intelligence serves as a global platform for the swift dissemination of research elucidating the practical application of AI methods across all engineering disciplines. Submitted papers are expected to present novel aspects of AI utilized in real-world engineering applications, validated using publicly available datasets to ensure the replicability of research outcomes. Join us in exploring the transformative potential of AI in engineering.