Sequential citation counts prediction enhanced by dynamic contents

IF 3.5 · CAS Zone 2 (Management Science) · JCR Q2, Computer Science, Interdisciplinary Applications
Guoxiu He, Sichen Gu, Zhikai Xue, Yufeng Duan, Xiaomin Zhu
DOI: 10.1016/j.joi.2025.101645
Journal: Journal of Informetrics, vol. 19, no. 2, Article 101645
Published: 2025-02-13
URL: https://www.sciencedirect.com/science/article/pii/S1751157725000094
Code: https://github.com/ECNU-Text-Computing/DICTA
Citations: 0

Abstract

The assessment of the impact of scholarly publications has garnered significant attention among researchers, particularly in predicting the future sequence of citation counts. However, current studies predominantly regard academic papers as static entities, failing to acknowledge the dynamic nature of their fixed content, which can undergo shifts in focus over time. To this end, we implement dynamic representations of the content to mirror chronological changes within the given paper, facilitating the sequential prediction of citation counts. Specifically, we propose a novel deep neural network called DynamIc Content-aware TrAnsformer (DICTA). The proposed model incorporates a dynamic content module that leverages the power of a sequential module to effectively capture the evolving focus information within each paper. To account for dependencies between the historical and future citation counts, our model utilizes a transformer-based framework as the backbone. With the encoder-decoder structure, it can effectively handle previous citation accumulations and then predict future citation potentials. Extensive experiments conducted on two scientific datasets demonstrate that DICTA achieves impressive performance and outperforms all baseline approaches. Further analyses underscore the significance of the dynamic content module. The code is available: https://github.com/ECNU-Text-Computing/DICTA
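The DICTA model itself is not reproduced here, but the core mechanism the abstract names as its backbone — attention in a transformer-based framework over a sequence of yearly citation counts — can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the function `attend`, the toy embedding matrix, and the example counts are all hypothetical.

```python
import numpy as np

def attend(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                       # (T, T) pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)      # row-wise softmax
    return weights @ V                                  # (T, d) context vectors

# Toy example: five years of historical citation counts, embedded into d = 4 dims.
rng = np.random.default_rng(0)
counts = np.array([1, 3, 7, 12, 20], dtype=float)       # hypothetical citation history
W = rng.normal(size=(1, 4))                             # toy embedding matrix
X = counts[:, None] @ W                                 # (5, 4) sequence of embeddings

ctx = attend(X, X, X)                                   # self-attention over the history
print(ctx.shape)                                        # each year now sees the others
```

In the full encoder-decoder setup the abstract describes, an encoder applies such attention to the historical citation sequence (and, in DICTA, to the dynamic content representations), while a decoder attends back over the encoder output to predict each future year's count in turn.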
Source Journal
Journal of Informetrics
Category: Social Sciences - Library and Information Sciences
CiteScore: 6.40
Self-citation rate: 16.20%
Articles per year: 95
Journal description: Journal of Informetrics (JOI) publishes rigorous, high-quality research on quantitative aspects of information science. The main focus of the journal is on topics in bibliometrics, scientometrics, webometrics, patentometrics, altmetrics, and research evaluation. Contributions studying informetric problems using methods from other quantitative fields, such as mathematics, statistics, computer science, economics and econometrics, and network science, are especially encouraged. JOI publishes both theoretical and empirical work. In general, case studies — for instance, a bibliometric analysis focusing on a specific research field or country — are not considered suitable for publication in JOI unless they contain innovative methodological elements.