Information Fusion | Pub Date: 2024-09-13 | DOI: 10.1016/j.inffus.2024.102695
Yimo Yan, Songyi Cui, Jiahui Liu, Yaping Zhao, Bodong Zhou, Yong-Hong Kuo
Title: Multimodal fusion for large-scale traffic prediction with heterogeneous retentive networks

Abstract: Traffic speed prediction is a critical challenge in transportation research due to the complex spatiotemporal dynamics of urban mobility. This study proposes a novel framework for fusing diverse data modalities to enhance short-term traffic speed forecasting accuracy. We introduce the Heterogeneous Retentive Network (H-RetNet), which integrates multisource urban data into high-dimensional representations encoded with geospatial relationships. By combining H-RetNet with a Gated Recurrent Unit (GRU), our model captures intricate spatial and temporal correlations. We validate the approach using a real-world Beijing traffic dataset encompassing social media, real estate, and point-of-interest data. Experiments demonstrate superior performance over existing methods, with the fusion architecture improving robustness. Specifically, we observe a 21.91% reduction in MSE, underscoring the potential of our framework to inform and enhance traffic management strategies.

(Information Fusion, vol. 114, Article 102695)
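The framework above couples fused multimodal representations with a GRU for temporal modeling. As a rough sketch of that temporal half only (not the H-RetNet itself; all sizes and weights below are illustrative assumptions), one GRU step over a fused feature vector looks like:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU update (Cho et al. formulation): x is the fused feature
    vector for the current time step, h is the previous hidden state."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # gated interpolation

rng = np.random.default_rng(0)
d_in, d_h = 8, 4                                    # illustrative sizes
params = (
    rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h),
    rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h),
    rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)), np.zeros(d_h),
)
h = np.zeros(d_h)
for x in rng.normal(size=(12, d_in)):               # 12 fused time steps
    h = gru_step(x, h, params)
```

Because the candidate state is tanh-bounded and the update is a convex combination, the hidden state stays in (-1, 1) regardless of the input scale, which is one reason GRUs are a common choice for noisy traffic series.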
Information Fusion | Pub Date: 2024-09-12 | DOI: 10.1016/j.inffus.2024.102673
Anirudh Atmakuru, Alen Shahini, Subrata Chakraborty, Silvia Seoni, Massimo Salvi, Abdul Hafeez-Baig, Sadaf Rashid, Ru San Tan, Prabal Datta Barua, Filippo Molinari, U Rajendra Acharya
Title: Artificial intelligence-based suicide prevention and prediction: A systematic review (2019–2023)

Abstract: Suicide is a major global public health concern, and the application of artificial intelligence (AI) methods, such as natural language processing (NLP), machine learning (ML), and deep learning (DL), has shown promise in advancing suicide prediction and prevention efforts. Recent advancements in AI, particularly NLP and DL, have opened up new avenues of research in suicide prediction and prevention. While several papers have reviewed specific detection techniques such as NLP or DL, no recent study provides a comprehensive overview of all AI-based studies in this field. In this work, we conduct a systematic literature review to identify relevant studies published between 2019 and 2023, resulting in the inclusion of 156 studies. We provide a comprehensive overview of the current state of research on AI-driven suicide prevention and prediction, focusing on the different data types and AI techniques employed. We discuss the benefits and challenges of these approaches and propose future research directions to improve their practical application. AI is highly capable of improving the accuracy and efficiency of risk assessment, enabling personalized interventions, and enhancing our understanding of risk and protective factors. Multidisciplinary approaches combining diverse data sources and AI methods can help identify individuals at risk by analyzing social media content, patient histories, and data from mobile devices, enabling timely intervention. However, challenges related to data privacy, algorithmic bias, model interpretability, and real-world implementation must be addressed to realize the full potential of these technologies. Future research should focus on integrating prediction and prevention strategies, harnessing multimodal data, and expanding the scope to include diverse populations. Collaboration across disciplines and stakeholders is essential to ensure that AI-driven suicide prevention and prediction efforts are ethical, culturally sensitive, and person-centered.

(Information Fusion, vol. 114, Article 102673)
Information Fusion | Pub Date: 2024-09-12 | DOI: 10.1016/j.inffus.2024.102665
Qishun Wang, Zhengzheng Tu, Chenglong Li, Jin Tang
Title: High performance RGB-Thermal Video Object Detection via hybrid fusion with progressive interaction and temporal-modal difference

Abstract: RGB-Thermal Video Object Detection (RGBT VOD) aims to localize and classify predefined objects in visible and thermal spectrum videos. The key issue in RGBT VOD lies in integrating multi-modal information effectively to improve detection performance. Current multi-modal fusion methods predominantly employ middle fusion strategies, but the inherent modal difference directly influences the effect of multi-modal fusion. Although the early fusion strategy reduces the modality gap in the middle stage of the network, achieving in-depth feature interaction between different modalities remains challenging. In this work, we propose a novel hybrid fusion network called PTMNet, which effectively combines an early fusion strategy based on progressive interaction with a middle fusion strategy based on the temporal-modal difference, for high performance RGBT VOD. In particular, we take each modality in turn as the master modality and achieve early fusion, with the other modality serving as auxiliary information, through progressive interaction. Such a design not only alleviates the modality gap but also facilitates middle fusion. The temporal-modal difference models temporal information through spatial offsets and utilizes feature erasure between modalities to motivate the network to focus on objects shared by both modalities. The hybrid fusion achieves high detection accuracy using only three input frames, which gives PTMNet a high inference speed. Experimental results show that our approach achieves state-of-the-art performance on the VT-VOD50 dataset while operating at over 70 FPS. The code will be freely released at https://github.com/tzz-ahu for academic purposes.

(Information Fusion, vol. 114, Article 102665)
Information Fusion | Pub Date: 2024-09-12 | DOI: 10.1016/j.inffus.2024.102694
Xiaoyan Zhang, Jiajia Lin
Title: Scalable data fusion via a scale-based hierarchical framework: Adapting to multi-source and multi-scale scenarios

Abstract: Multi-source information fusion addresses the challenge of integrating and transforming complementary data from diverse sources into a unified information representation for centralized knowledge discovery. However, traditional methods face difficulties when applied to multi-scale data: optimal scale selection can resolve these issues, but it typically cannot identify the optimal and simplest data across different data-source relationships. Moreover, in multi-source, multi-scale environments, heterogeneous data (where identical samples have different features and scales in different sources) is common. To address these challenges, this study proposes a novel approach in two key stages: first, heterogeneous data sources are aggregated and the datasets refined using information gain; second, a customized Scale-based Tree (SbT) structure is built for each attribute to extract the information value at a specific scale, thereby achieving effective data fusion. Extensive experimental evaluations cover ten datasets, reporting detailed performance across multiple metrics, including Approximation Precision (AP), Approximation Quality (AQ), classification accuracy, and computational efficiency. The results highlight the robustness and effectiveness of the proposed algorithm in handling complex multi-source, multi-scale data environments, demonstrating its potential and practicality for real-world data fusion challenges.

(Information Fusion, vol. 114, Article 102694)
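The first stage described above refines the aggregated datasets using information gain. A minimal, generic information-gain computation for a categorical attribute (the textbook definition, not the paper's exact refinement procedure) can be sketched as:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy H(Y) of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    """IG(Y; A) = H(Y) - sum_v p(A=v) * H(Y | A=v)."""
    n = len(labels)
    groups = {}
    for v, y in zip(attribute_values, labels):
        groups.setdefault(v, []).append(y)
    conditional = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - conditional

# A perfectly informative attribute recovers the full label entropy
# (1 bit here), while an uninformative one yields zero gain.
ig_perfect = information_gain(["a", "a", "b", "b"], [0, 0, 1, 1])
ig_useless = information_gain(["a", "b", "a", "b"], [0, 0, 1, 1])
```

Ranking attributes (or whole candidate sources) by this score and keeping the top-scoring ones is the usual way such a refinement step is realized.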
Information Fusion | Pub Date: 2024-09-12 | DOI: 10.1016/j.inffus.2024.102693
Xuanhao Yang, Hangjun Che, Man-Fai Leung
Title: Tensor-based unsupervised feature selection for error-robust handling of unbalanced incomplete multi-view data

Abstract: Recent advancements in multi-view unsupervised feature selection (MUFS) have been notable, yet two primary challenges persist. First, real-world datasets frequently consist of unbalanced incomplete multi-view data, a scenario not adequately addressed by current MUFS methodologies. Second, the inherent complexity and heterogeneity of multi-view data often introduce significant noise, an aspect largely neglected by existing approaches, compromising their noise robustness. To tackle these issues, this paper introduces a Tensor-Based Error-Robust Unbalanced Incomplete Multi-view Unsupervised Feature Selection (TERUIMUFS) strategy. The proposed MUFS framework specifically caters to unbalanced incomplete multi-view data, incorporating self-representation learning with a tensor low-rank constraint and sample diversity learning. This approach not only mitigates errors in the self-representation process but also corrects errors in the self-representation tensor, significantly enhancing the model's resilience to noise. Furthermore, graph learning serves as a pivotal link between MUFS and self-representation learning. An innovative iterative optimization algorithm is developed for TERUIMUFS, complete with a thorough analysis of its convergence and computational complexity. Experimental results demonstrate TERUIMUFS's effectiveness and competitiveness in addressing unbalanced incomplete multi-view unsupervised feature selection (UIMUFS), marking a significant advancement in the field.

(Information Fusion, vol. 114, Article 102693)
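Tensor low-rank constraints of this kind are commonly optimized with singular value thresholding (SVT), the proximal operator of the nuclear norm, applied inside an iterative solver. A matrix-level sketch of that generic building block (a standard tool, not the paper's actual algorithm) is:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of
    tau * ||M||_* (nuclear norm). Shrinks every singular value by tau
    and drops those that reach zero, yielding a low-rank estimate."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(1)
# Rank-3 signal plus small noise: SVT suppresses the noise directions
# because their singular values fall below the threshold.
L = rng.normal(size=(20, 3)) @ rng.normal(size=(3, 20))
noisy = L + 0.01 * rng.normal(size=(20, 20))
denoised = svt(noisy, tau=1.0)
```

In tensor formulations, the same shrinkage is typically applied to the frontal slices of the tensor in a transformed domain; the matrix case above conveys the core mechanism.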
Information Fusion | Pub Date: 2024-09-10 | DOI: 10.1016/j.inffus.2024.102669
Yujie Mo, Heng Tao Shen, Xiaofeng Zhu
Title: Unsupervised multi-view graph representation learning with dual weight-net

Abstract: Unsupervised multi-view graph representation learning (UMGRL) aims to capture the complex relationships in a multi-view graph without human annotations, and has therefore been widely applied in real-world applications. However, existing UMGRL methods still face the following issues: (i) they tend to overlook the varying influence of individual nodes and the varying importance of the relationships captured by different graphs, and may thus lose discriminative information carried by high-influence nodes and important relationships; (ii) they generally ignore heterophilic edges in the multi-view graph, potentially introducing noise from different classes into node representations. To address these issues, we propose a novel bi-level optimization UMGRL framework with a dual weight-net. Specifically, the lower level optimizes the parameters of the encoders to obtain node representations of different graphs, while the upper level optimizes the parameters of the dual weight-net to adaptively and dynamically capture importance at the node, graph, and edge levels, thus obtaining discriminative fused representations for downstream tasks. Moreover, theoretical analysis demonstrates that the proposed method has better generalization ability on downstream tasks than previous UMGRL methods. Extensive experimental results verify the effectiveness of the proposed method on public datasets across different downstream tasks, compared to numerous baseline methods.

(Information Fusion, vol. 114, Article 102669)
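The upper-level weight-net's role of assigning adaptive importance to views can be caricatured with a softmax weighting over per-view node representations. This toy stand-in uses hand-picked logits where the paper learns them end-to-end:

```python
import numpy as np

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def fuse_views(view_reps, view_logits):
    """Convex combination of per-view node representations.
    view_reps: list of (n_nodes, d) arrays, one per graph view.
    view_logits: unnormalized importance scores (learned by a weight
    network in the paper, fixed here for illustration)."""
    w = softmax(np.asarray(view_logits, dtype=float))
    return sum(wi * Z for wi, Z in zip(w, view_reps)), w

rng = np.random.default_rng(2)
views = [rng.normal(size=(5, 3)) for _ in range(3)]  # 3 views, 5 nodes
fused, weights = fuse_views(views, view_logits=[0.5, 1.5, -1.0])
```

The softmax keeps the fused representation inside the convex hull of the view representations; node-level and edge-level weighting in the paper generalize the same idea from one scalar per view to one weight per node or edge.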
Information Fusion | Pub Date: 2024-09-10 | DOI: 10.1016/j.inffus.2024.102691
Jain-Wun Su, Chiao-Ting Chen, De-Ren Toh, Szu-Hao Huang

Title: Evolving intra- and inter-session graph fusion for next item recommendation

Abstract: Next-item recommendation aims to predict users' subsequent behaviors from their historical sequence data. However, sessions are often anonymous, short, and time-varying, making it challenging to capture accurate and evolving item representations. Existing methods based on static graphs may fail to model the evolving semantics of items over time. To address this problem, we propose the Evolving Intra-session and Inter-session Graph Neural Network (EII-GNN), which captures evolving item semantics by fusing global and local graph information. EII-GNN utilizes a global dynamic graph to model inter-session item transitions and update item embeddings at each timestamp. It also constructs a per-session graph with shortcut edges to learn complex intra-session patterns. To personalize recommendations, a history-aware GRU incorporates the user's past sessions. The inter-session graph, intra-session graph, and history embeddings are fused into the session representation for the final recommendation. In experiments on three real-world datasets, our model performed well against its state-of-the-art counterparts.

(Information Fusion, vol. 114, Article 102691)
Information Fusion | Pub Date: 2024-09-10 | DOI: 10.1016/j.inffus.2024.102680
Xia Chen, Zhaogang Ding, Yuan Gao, Hengjie Zhang, Yucheng Dong

Title: Competitive resource allocation on a network considering opinion dynamics with self-confidence evolution

Abstract: The formation of public opinion is typically influenced by different stakeholders, such as governments and firms. Recently, various real-world problems related to the management of public opinion have emerged, requiring stakeholders to strategically allocate resources on networks to achieve their objectives. To address this, it is imperative to consider the dynamics of opinion formation. Notably, in existing opinion dynamics models, individuals possess self-confidence parameters reflecting their adherence to their historical opinions. However, most extant studies assume that individuals' self-confidence levels remain constant over time, which cannot accurately capture the intricacies of human behavior. In response to this gap, we first introduce a self-confidence evolution model that encompasses two influencing factors: the self-confidence levels of one's group mates and the passage of time. Furthermore, we present the social network DeGroot model with self-confidence evolution and conduct theoretical analyses. Moreover, we propose a game model to identify the optimal resource allocation strategies of players on a network. Finally, we provide sensitivity analyses, comparative studies, and a case study. This paper highlights the significance of incorporating self-confidence evolution into the process of opinion dynamics, and the results provide valuable practical insights for players seeking to improve their resource allocation on a network to manage public opinion more effectively.

(Information Fusion, vol. 114, Article 102680)
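The dynamics being extended here is the DeGroot update x(t+1) = W(t) x(t), with a row-stochastic influence matrix whose diagonal entries are the agents' self-confidence levels. A minimal sketch in which self-confidence simply hardens over time (the linear growth rule is an illustrative assumption, not the paper's evolution model):

```python
import numpy as np

def degroot_step(x, self_conf, neighbor_w):
    """One DeGroot update. self_conf[i] is agent i's weight on its own
    opinion; the remaining 1 - self_conf[i] is spread over neighbors in
    proportion to neighbor_w[i], keeping each row stochastic."""
    n = len(x)
    W = np.zeros((n, n))
    for i in range(n):
        off = neighbor_w[i] / neighbor_w[i].sum()
        W[i] = (1.0 - self_conf[i]) * off
        W[i, i] = self_conf[i]
    return W @ x

rng = np.random.default_rng(3)
n = 5
x = rng.uniform(0.0, 1.0, size=n)            # initial opinions in [0, 1]
self_conf = np.full(n, 0.3)
neighbor_w = rng.uniform(0.1, 1.0, size=(n, n))
np.fill_diagonal(neighbor_w, 0.0)            # no self-edge in neighbor weights

for t in range(200):
    x = degroot_step(x, self_conf, neighbor_w)
    # Toy "evolution": confidence hardens with time, capped below 1.
    self_conf = np.minimum(self_conf + 0.002, 0.95)
```

On a fully connected network with self-confidence bounded below 1, each step contracts the opinion spread, so the group still converges toward consensus, only more slowly as confidence grows.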
Information Fusion | Pub Date: 2024-09-08 | DOI: 10.1016/j.inffus.2024.102689
Beibei Yu, Jiayi Li, Xin Huang
Title: STSNet: A cross-spatial resolution multi-modal remote sensing deep fusion network for high resolution land-cover segmentation

Abstract: Recently, deep learning models have found extensive application in high-resolution land-cover segmentation research. However, most current research still suffers from issues such as insufficient utilization of multi-modal information, which limits further improvement in segmentation accuracy. Moreover, differences in the size and spatial resolution of multi-modal datasets pose additional challenges for multi-modal land-cover segmentation. Therefore, we propose a high-resolution land-cover segmentation network (STSNet) with cross-spatial resolution spatio-temporal-spectral deep fusion. The network effectively utilizes spatio-temporal-spectral features to achieve information complementarity among multi-modal data. Specifically, STSNet consists of four components: (1) a high-resolution, multi-scale spatial-spectral encoder that jointly extracts subtle spatial-spectral features from hyperspectral and high spatial resolution images; (2) a long-term spatio-temporal encoder, formulated by a spectral convolution and a spatio-temporal transformer block, that simultaneously delineates the spatial, temporal, and spectral information in dense time-series Sentinel-2 imagery; (3) a cross-resolution fusion module that alleviates the spatial resolution differences between multi-modal data and effectively leverages complementary spatio-temporal-spectral information; and (4) a multi-scale decoder that integrates multi-scale information from multi-modal data. We utilized airborne hyperspectral remote sensing imagery from the Shenyang region of China in 2020, with a spatial resolution of 1 m, 249 spectral bands, and a spectral resolution ≤ 5 nm, together with dense time-series Sentinel images acquired in the same period with a spatial resolution of 10 m, 10 spectral bands, and 31 time steps. These datasets were combined to generate a multi-modal dataset called WHU-H²SR-MT, the first openly accessible large-scale high spatio-temporal-spectral satellite remote sensing dataset (with more than 2500 image pairs of 300 m × 300 m each). Additionally, we employed two open-source datasets to validate the effectiveness of the proposed modules. Extensive experiments show that our multi-scale spatial-spectral encoder, spatio-temporal encoder, and cross-resolution fusion module outperform existing state-of-the-art (SOTA) algorithms in overall performance on high-resolution land-cover segmentation. The new multi-modal dataset will be made available at http://irsip.whu.edu.cn/resources/resources_en_v2.php, along with the corresponding code at https://github.com/RS-Mage/STSNet.

(Information Fusion, vol. 114, Article 102689)
Information Fusion | Pub Date: 2024-09-07 | DOI: 10.1016/j.inffus.2024.102682
Yuhuan Lu, Wei Wang, Rufan Bai, Shengwei Zhou, Lalit Garg, Ali Kashif Bashir, Weiwei Jiang, Xiping Hu
Title: Hyper-relational interaction modeling in multi-modal trajectory prediction for intelligent connected vehicles in smart cities

Abstract: Trajectory prediction of surrounding traffic participants is vital for the driving safety of Intelligent Connected Vehicles (ICVs), and has been enabled by the multi-sensor information that ICVs collect. To accurately predict the future movements of traffic agents, it is crucial to subtly model the inter-agent interaction. However, existing works focus on the correlations between agents and the map information while neglecting the importance of directly modeling the impact of map elements on inter-agent interactions, which is beneficial for representing agent behaviors. Against this background, we propose to model the hyper-relational interaction, which incorporates map elements into the inter-agent interaction. To this end, we propose a novel Hyper-relational Multi-modal Trajectory Prediction (HyperMTP) approach. Specifically, a hyper-relational driving graph is first constructed, in which each hyper-relational interaction is represented as a hyperedge directly connecting multiple nodes (i.e., agents and map elements). A structure-aware embedding initialization technique is then developed to obtain unbiased initial embeddings. Afterward, hypergraph dual-attention networks are designed to capture correlations between graph elements while retaining the hyper-relational structure. Finally, a heterogeneous Transformer is devised to further capture the correlations between agents' states and their corresponding hyper-relational interactions. Experimental results show that HyperMTP consistently outperforms the best-performing baseline, with an average improvement of 4.8% across two real-world datasets. Moreover, HyperMTP improves the interpretability of trajectory prediction by quantifying the impact of map elements on inter-agent interactions.

(Information Fusion, vol. 114, Article 102682)
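A hyperedge that ties several agents and map elements into one interaction can be encoded with a node-by-edge incidence matrix. The following is a generic HGNN-style hypergraph convolution over such an incidence structure, shown only to make the hyperedge idea concrete; it is not HyperMTP's dual-attention network, and the toy scene is an invented example:

```python
import numpy as np

def hypergraph_conv(X, H, Theta):
    """One hypergraph convolution layer (HGNN-style, unweighted edges).
    X: (n_nodes, d_in) node features (agents and map elements alike),
    H: (n_nodes, n_edges) incidence matrix, H[v, e] = 1 if node v
       belongs to hyperedge e (one edge per modeled interaction),
    Theta: (d_in, d_out) feature projection."""
    Dv = H.sum(axis=1)                    # node degrees
    De = H.sum(axis=0)                    # hyperedge sizes
    # Average node features within each hyperedge, then average the
    # resulting edge features over each node's incident hyperedges.
    edge_means = (H.T @ X) / De[:, None]
    node_agg = (H @ edge_means) / Dv[:, None]
    return np.tanh(node_agg @ Theta)

# Toy scene: nodes 0-2 are agents, 3-4 are map elements (a lane and a
# crosswalk). Hyperedge 0 ties agents 0 and 1 to the lane; hyperedge 1
# ties agents 1 and 2 to the crosswalk.
H = np.array([[1, 0],
              [1, 1],
              [0, 1],
              [1, 0],
              [0, 1]], dtype=float)
rng = np.random.default_rng(4)
X = rng.normal(size=(5, 6))
out = hypergraph_conv(X, H, rng.normal(size=(6, 2)))
```

Because each hyperedge aggregates agents and map elements together before the result flows back to the nodes, map context enters the inter-agent message passing directly, which is the structural point the abstract makes.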