{"title":"MDCKE: Multimodal deep-context knowledge extractor that integrates contextual information","authors":"Hyojin Ko, Joon Yoo, Ok-Ran Jeong","doi":"10.1016/j.aej.2025.01.119","DOIUrl":null,"url":null,"abstract":"<div><div>Extraction of comprehensive information from diverse data sources remains a significant challenge in contemporary research. Although multimodal Named Entity Recognition (NER) and Relation Extraction (RE) tasks have garnered significant attention, existing methods often focus on surface-level information, underutilizing the potential depth of the available data. To address this issue, this study introduces a Multimodal Deep-Context Knowledge Extractor (MDCKE) that generates hierarchical multi-scale images and captions from original images. These connectors between image and text enhance information extraction by integrating more complex data relationships and contexts to build a multimodal knowledge graph. Captioning precedes feature extraction, leveraging semantic descriptions to align global and local image features and enhance inter- and intramodality alignment. Experimental validation on the Twitter2015 and Multimodal Neural Relation Extraction (MNRE) datasets demonstrated the novelty and accuracy of MDCKE, resulting in an improvement in the F1-score by up to 5.83% and 26.26%, respectively, compared to State-Of-The-Art (SOTA) models. MDCKE was compared with top models, case studies, and simulations in low-resource settings, proving its flexibility and efficacy. An ablation study further corroborated the contribution of each component, resulting in an approximately 6% enhancement in the F1-score across the datasets.</div></div>","PeriodicalId":7484,"journal":{"name":"alexandria engineering journal","volume":"119 ","pages":"Pages 478-492"},"PeriodicalIF":6.2000,"publicationDate":"2025-02-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"alexandria engineering journal","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1110016825001474","RegionNum":2,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"ENGINEERING, MULTIDISCIPLINARY","Score":null,"Total":0}
Abstract
Extraction of comprehensive information from diverse data sources remains a significant challenge in contemporary research. Although multimodal Named Entity Recognition (NER) and Relation Extraction (RE) have attracted considerable attention, existing methods often focus on surface-level information and underutilize the depth of the available data. To address this issue, this study introduces the Multimodal Deep-Context Knowledge Extractor (MDCKE), which generates hierarchical multi-scale images and captions from the original image. These connectors between image and text enhance information extraction by integrating more complex data relationships and contexts to build a multimodal knowledge graph. Captioning precedes feature extraction, so that the semantic descriptions can be leveraged to align global and local image features and to strengthen inter- and intra-modality alignment. Experimental validation on the Twitter2015 and Multimodal Neural Relation Extraction (MNRE) datasets demonstrated the novelty and accuracy of MDCKE, improving the F1-score by up to 5.83% and 26.26%, respectively, over State-Of-The-Art (SOTA) models. MDCKE was further evaluated against top models through case studies and simulations in low-resource settings, confirming its flexibility and efficacy. An ablation study corroborated the contribution of each component, which together yield an approximately 6% enhancement in the F1-score across the datasets.
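To make the caption-before-feature-extraction idea concrete, the following is a minimal sketch, not the authors' implementation: it generates hierarchical multi-scale crops of an original image and captions each crop, producing the textual "connectors" the abstract describes. The BLIP captioning model and the 2x2-grid pyramid scheme are assumptions for illustration only.

```python
# Minimal sketch (assumed components, not the paper's code): build a multi-scale
# image pyramid, caption every crop, and collect the captions as image-text
# connectors for downstream multimodal NER/RE.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Hypothetical choice of captioner; the paper does not specify this model.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
captioner = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

def multiscale_crops(image: Image.Image, levels: int = 3):
    """Yield the full image, then an n x n grid of crops per extra level
    (n = 2**level); the grid layout is an illustrative assumption."""
    yield image
    w, h = image.size
    for level in range(1, levels):
        n = 2 ** level
        for i in range(n):
            for j in range(n):
                yield image.crop((i * w // n, j * h // n,
                                  (i + 1) * w // n, (j + 1) * h // n))

def caption_crops(image: Image.Image) -> list[str]:
    """Caption every crop; the captions serve as connectors between
    global/local visual regions and the sentence text."""
    captions = []
    for crop in multiscale_crops(image):
        inputs = processor(images=crop, return_tensors="pt")
        out = captioner.generate(**inputs, max_new_tokens=30)
        captions.append(processor.decode(out[0], skip_special_tokens=True))
    return captions

# In a full pipeline, these captions would be concatenated with the input
# sentence before feature extraction, so global and local image features can
# be aligned against the text, as the abstract describes.
```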
Journal Introduction:
Alexandria Engineering Journal is an international journal devoted to publishing high-quality papers in the field of engineering and applied science. Alexandria Engineering Journal is indexed in Engineering Information Services (EIS) and Chemical Abstracts (CA). The papers published in Alexandria Engineering Journal are grouped into five sections, according to the following classification:
• Mechanical, Production, Marine and Textile Engineering
• Electrical Engineering, Computer Science and Nuclear Engineering
• Civil and Architecture Engineering
• Chemical Engineering and Applied Sciences
• Environmental Engineering