Authors: Dangguo Shao, Shun Su, Lei Ma, Sanli Yi, Hua Lai
DOI: 10.1007/s10844-024-00889-2
Journal: Journal of Intelligent Information Systems (Impact Factor 2.3, JCR Q3, Computer Science, Artificial Intelligence)
Published: 2024-09-16 (Journal Article)
DA-BAG: A multi-model fusion text classification method combining BERT and GCN using self-domain adversarial training
Pre-training-based methods are considered some of the most advanced techniques in natural language processing tasks, particularly in text classification. However, these methods often overlook global semantic information. In contrast, traditional graph learning methods focus solely on structured information from text to graph, neglecting the hidden local information within the syntactic structure of the text. When combined, these approaches may introduce new noise and training biases. To tackle these challenges, we introduce DA-BAG, a novel approach that co-trains BERT and graph convolution models. Using a self-domain adversarial training method on a single dataset, DA-BAG extracts multi-domain distribution features across multiple models, enabling self-adversarial domain adaptation training without the need for additional data and thereby enhancing model generalization and robustness. Furthermore, by incorporating an attention mechanism across multiple models, DA-BAG effectively combines the structural semantics of the graph with the token-level semantics of the pre-trained model, leveraging hidden information within the text's syntactic structure. Additionally, a sequential multi-layer graph convolutional network (GCN) connection structure based on a residual pre-activation variant is employed to stabilize the feature distribution of graph data and adjust the graph data structure accordingly. Extensive evaluations on five datasets (20NG, R8, R52, Ohsumed, MR) demonstrate that DA-BAG achieves state-of-the-art performance across a diverse range of datasets.
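The abstract mentions a GCN connection structure based on a residual pre-activation variant. The paper's exact formulation is not given here, so the following is only a minimal NumPy sketch of the general idea: apply the nonlinearity before the graph convolution and add an identity skip connection, using the standard symmetric adjacency normalization. All function and variable names are illustrative, not taken from the paper.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} used by standard GCNs."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def pre_activation_gcn_layer(X, A_norm, W):
    """Residual pre-activation GCN layer (illustrative):
    the activation is applied to the input *before* propagation,
    and the layer output is added back to the input (skip connection)."""
    return X + A_norm @ np.maximum(X, 0.0) @ W

# Toy graph: 3 nodes in a path, 4-dimensional node features.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
rng = np.random.default_rng(0)
X = rng.standard_normal((3, 4))
W = np.eye(4) * 0.1                          # small weight for a stable residual update

A_norm = normalize_adj(A)
H = pre_activation_gcn_layer(X, A_norm, W)
print(H.shape)                               # node features keep their shape: (3, 4)
```

Because each layer's output has the same shape as its input, such layers can be stacked sequentially, which is consistent with the "sequential multi-layer" structure the abstract describes; the skip connection is what helps keep the feature distribution stable as depth grows.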
About the journal:
The mission of the Journal of Intelligent Information Systems: Integrating Artificial Intelligence and Database Technologies is to foster and present research and development results focused on the integration of artificial intelligence and database technologies to create next-generation information systems: intelligent information systems.
These new information systems embody knowledge that allows them to exhibit intelligent behavior, cooperate with users and other systems in problem solving, discovery, access, retrieval and manipulation of a wide variety of multimedia data and knowledge, and reason under uncertainty. Increasingly, knowledge-directed inference processes are being used to:
discover knowledge from large data collections,
provide cooperative support to users in complex query formulation and refinement,
access, retrieve, store and manage large collections of multimedia data and knowledge,
integrate information from multiple heterogeneous data and knowledge sources, and
reason about information under uncertain conditions.
Multimedia and hypermedia information systems now operate on a global scale over the Internet, and new tools and techniques are needed to manage these dynamic and evolving information spaces.
The Journal of Intelligent Information Systems provides a forum wherein academics, researchers and practitioners may publish high-quality, original and state-of-the-art papers describing theoretical aspects, systems architectures, analysis and design tools and techniques, and implementation experiences in intelligent information systems. The categories of papers published by JIIS include: research papers, invited papers, meeting, workshop and conference announcements and reports, survey and tutorial articles, and book reviews. Short articles describing open problems or their solutions are also welcome.