Graph-based deep fusion for architectural text representation.

Shaoyun Hu, Qingxiong Weng
PeerJ Computer Science · Vol. 11, e2735 · Published 2025-03-19 (eCollection date 2025-01-01) · DOI: 10.7717/peerj-cs.2735
Impact Factor: 3.5 · JCR Q2 (Computer Science, Artificial Intelligence) · CAS Tier 4 (Computer Science)
Full-text PDF (PMC): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11935773/pdf/
Citations: 0

Abstract

Amid rapid global urbanization and the swift evolution of the architecture industry, there is a growing demand for automated processing of architectural textual information. Meeting this demand is difficult because architectural texts abound in specialized vocabulary, which traditional models struggle to represent accurately. To address this, we propose a novel fusion method that integrates Transformer-based models with graph neural networks (GNNs) for architectural text representation. We independently apply Bidirectional Encoder Representations from Transformers (BERT) and the robustly optimized BERT approach (RoBERTa) to generate initial document representations, and we use term frequency-inverse document frequency (TF-IDF) to extract keywords from each document and build a corresponding keyword set. A graph is then constructed from the keyword vocabulary and the document embeddings and fed into a graph attention network (GAT), whose attention module and network layers produce the final document embedding. In comparison experiments, the proposed model outperforms all baselines, and ablation studies confirm the contribution of each module, further reinforcing the robustness of our approach.
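To make the pipeline described in the abstract concrete, the following Python sketch implements one plausible reading of it: TF-IDF selects each document's keywords, documents and keywords become nodes in a shared graph, and a single-head graph attention layer refines the node features. This is a minimal illustration under assumed names and settings (`build_keyword_graph`, `top_k`, and the 768-dimensional stand-in features are all hypothetical), not the authors' implementation; in the paper the initial document features come from BERT and RoBERTa rather than the random vectors used here.

```python
# Illustrative sketch only: TF-IDF keyword extraction, a document-keyword
# graph, and a single-head graph attention layer. Names, dimensions, and
# top_k are assumptions, not the paper's actual code or hyperparameters.

import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.feature_extraction.text import TfidfVectorizer


def build_keyword_graph(docs, top_k=5):
    """Link each document node to its top-k TF-IDF keyword nodes."""
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(docs)              # (num_docs, vocab_size)
    vocab = vec.get_feature_names_out()
    num_docs = len(docs)
    edges = []
    for d in range(num_docs):
        row = tfidf[d].toarray().ravel()
        for w in row.argsort()[::-1][:top_k]:    # highest TF-IDF first
            if row[w] > 0:
                k = num_docs + int(w)            # keyword nodes follow doc nodes
                edges += [(d, k), (k, d)]        # undirected edge
    return torch.tensor(edges, dtype=torch.long).t(), vocab


class GATLayer(nn.Module):
    """Single-head graph attention layer in the style of Velickovic et al. (2018)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)   # shared projection
        self.a = nn.Linear(2 * out_dim, 1, bias=False)    # attention scorer

    def forward(self, x, edge_index):
        h = self.W(x)                                     # (N, out_dim)
        src, dst = edge_index                             # messages flow src -> dst
        e = F.leaky_relu(self.a(torch.cat([h[src], h[dst]], dim=-1))).squeeze(-1)
        alpha = torch.zeros_like(e)
        for node in dst.unique():                         # softmax over each node's in-edges
            mask = dst == node
            alpha[mask] = F.softmax(e[mask], dim=0)
        out = torch.zeros_like(h)
        out.index_add_(0, dst, alpha.unsqueeze(-1) * h[src])  # weighted aggregation
        return F.elu(out)


# Toy usage: in the paper the document node features come from BERT/RoBERTa;
# random 768-dim vectors stand in for them here.
docs = ["curtain wall anchoring detail for steel frames",
        "reinforced concrete slab load analysis",
        "seismic bracing of steel frame structures"]
edge_index, vocab = build_keyword_graph(docs, top_k=3)
x = torch.randn(len(docs) + len(vocab), 768)      # doc + keyword node features
fused = GATLayer(768, 128)(x, edge_index)         # attention-refined embeddings
doc_embeddings = fused[:len(docs)]                # final document representations
```

The per-destination softmax loop keeps the attention normalization explicit for readability; a production version would vectorize it (for example with a scatter-softmax) and stack multiple attention heads, as is standard for GATs.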

Source journal: PeerJ Computer Science (General Computer Science)
CiteScore: 6.10 · Self-citation rate: 5.30% · Articles per year: 332 · Review time: 10 weeks
Journal description: PeerJ Computer Science is an open access journal covering all subject areas in computer science, with the backing of a prestigious advisory board and more than 300 academic editors.