Information Fusion — Latest Articles

Deep-TCP: Multi-source data fusion for deep learning-powered tropical cyclone intensity prediction to enhance urban sustainability
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2024-09-07 · DOI: 10.1016/j.inffus.2024.102670
Shuailong Jiang, Maohan Liang, Chunzai Wang, Hanjie Fan, Yingying Ma

Tropical cyclones (TCs) exert a profound impact on cities, causing extensive damage and losses. TC intensity prediction is therefore crucial for creating sustainable cities, as it enables proactive measures such as evacuation planning, infrastructure reinforcement, and emergency response coordination. In this study, we propose a Deep learning-powered TC intensity Prediction (Deep-TCP) framework. Deep-TCP contains a data constraint module for fusing features from multiple sources and establishing a unified global representation. To capture spatiotemporal attributes, a Spatial-Temporal Attention (ST-Attention) module is built to distill insights from environmental variables. To improve the robustness and stability of the predictions, an encoder-decoder module that utilizes the ConvGPU unit is introduced to enhance feature maps, and a novel feature enhancement module bolsters generalization capability and mitigates dependency attenuation. The results demonstrate that the Deep-TCP framework significantly outperforms various benchmarks and effectively predicts multiple TC categories within the 6–24 h timeframe, showing strong capability in predicting changing trends. The reliable prediction results are potentially beneficial for disaster management and urban planning, significantly enhancing urban sustainability by improving preparedness and response strategies. (Information Fusion, Volume 114, Article 102670.)
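The ST-Attention idea of weighting spatial cells of environmental-variable fields can be illustrated with a minimal numpy sketch. This is our own toy illustration, not the authors' implementation: the learned query vector is replaced by a fixed random stand-in, and only the spatial-attention half of the module is shown.

```python
import numpy as np

def spatial_attention(feats: np.ndarray) -> np.ndarray:
    """Toy spatial attention over a (T, H, W, C) stack of
    environmental-variable feature maps: score each spatial cell
    against a query, softmax over space, return the weighted
    spatial summary of shape (T, C)."""
    T, H, W, C = feats.shape
    rng = np.random.default_rng(0)
    query = rng.normal(size=C)                     # stand-in for a learned query
    flat = feats.reshape(T, H * W, C)
    scores = flat @ query                          # (T, H*W)
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over spatial cells
    return np.einsum("ts,tsc->tc", weights, flat)

# Example: 4 time steps of an 8x8 grid with 3 environmental channels
x = np.random.default_rng(1).normal(size=(4, 8, 8, 3))
pooled = spatial_attention(x)
print(pooled.shape)  # (4, 3)
```

In the full module a temporal attention stage would follow, mixing the per-step summaries across the time axis.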
Citations: 0
SFGCN: Synergetic fusion-based graph convolutional networks approach for link prediction in social networks
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2024-09-07 · DOI: 10.1016/j.inffus.2024.102684
Sang-Woong Lee, Jawad Tanveer, Amir Masoud Rahmani, Hamid Alinejad-Rokny, Parisa Khoshvaght, Gholamreza Zare, Pegah Malekpour Alamdari, Mehdi Hosseinzadeh

Accurate Link Prediction (LP) in Social Networks (SNs) is crucial for practical applications such as recommendation systems and network security, yet traditional techniques often struggle to capture the intricate, multidimensional nature of these networks. This paper presents Synergetic Fusion-based Graph Convolutional Networks (SFGCN), a novel approach designed to enhance LP accuracy in SNs. The SFGCN model uses a fusion architecture that combines structural features with other attribute data through early, intermediate, and late fusion mechanisms to create improved node and edge representations. We thoroughly evaluate SFGCN on seven real-world datasets spanning citation networks, co-purchase networks, and academic publication domains. The results demonstrate its superiority over baseline GCN architectures and other selected LP methods, achieving a 6.88% improvement in accuracy. The experiments show that the model captures the complex interactions and dependencies within SNs, providing a comprehensive understanding of their underlying dynamics. The approach can be applied in SN analysis to improve both the precision of LP predictions and the adaptability of the model across diverse SN scenarios. (Information Fusion, Volume 114, Article 102684.)
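The basic pipeline behind GCN-based link prediction can be sketched in a few lines of numpy. This sketch is our own illustration under simplifying assumptions: it shows only the early-fusion path (concatenating structural and attribute features before one graph-convolution step) with random stand-in weights, whereas SFGCN also uses intermediate and late fusion.

```python
import numpy as np

def normalize_adj(A):
    """Symmetrically normalised adjacency with self-loops:
    D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

def link_scores(A, X_struct, X_attr, rng):
    """Early-fuse structural and attribute features, propagate once
    over the graph (one GCN layer with ReLU), then score every node
    pair by inner product passed through a sigmoid."""
    X = np.concatenate([X_struct, X_attr], axis=1)   # early fusion
    W = rng.normal(scale=0.1, size=(X.shape[1], 8))  # stand-in layer weights
    Z = np.maximum(normalize_adj(A) @ X @ W, 0.0)    # node embeddings
    return 1.0 / (1.0 + np.exp(-(Z @ Z.T)))          # pairwise link probabilities

rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
S = rng.normal(size=(4, 5))   # structural features (e.g. embedding-based)
F = rng.normal(size=(4, 3))   # node attribute features
P = link_scores(A, S, F, rng)
print(P.shape)  # (4, 4)
```

In a trained model the weights would be learned against observed edges; the symmetric score matrix `P` is then thresholded or ranked to predict missing links.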
Citations: 0
Less is more: A closer look at semantic-based few-shot learning
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2024-09-07 · DOI: 10.1016/j.inffus.2024.102672
Chunpeng Zhou, Zhi Yu, Xilu Yuan, Sheng Zhou, Jiajun Bu, Haishuai Wang

Few-shot Learning (FSL) aims to learn and distinguish new categories from a scant number of available samples, a significant challenge in deep learning. Recent work leverages the additional semantic or linguistic information of scarce categories, together with a pre-trained language model, to facilitate learning and partially alleviate the shortage of supervision signals. Nonetheless, the full potential of semantic information and pre-trained language models has so far been underestimated in few-shot learning, resulting in limited performance gains. To address this, we propose a straightforward and efficacious framework for few-shot learning tasks, specifically designed to exploit semantic information and language models. We explicitly harness the zero-shot capability of the pre-trained language model with learnable prompts, and we directly add the visual feature to the textual feature for inference, without the intricately designed fusion modules of prior studies. Additionally, we apply self-ensemble and distillation to further enhance performance. Extensive experiments on four widely used few-shot datasets demonstrate that our simple framework achieves impressive results; particularly noteworthy is its performance in the 1-shot learning task, surpassing the current state-of-the-art by an average of 3.3% in classification accuracy. Our code will be available at https://github.com/zhouchunpong/SimpleFewShot. (Information Fusion, Volume 114, Article 102672.)
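The "directly add the visual feature with the textual feature" step has a very small footprint, which is the paper's point. Below is a hedged numpy sketch of that additive fusion in a prototype-based classifier; the normalisation scheme and nearest-prototype inference are our own simplifying assumptions, not details taken from the paper.

```python
import numpy as np

def l2norm(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def fused_prototypes(support_visual, text_feats):
    """Per-class prototype = normalised mean visual feature plus the
    normalised language-model class embedding (simple additive fusion).

    support_visual: (n_class, n_shot, d) visual features
    text_feats:     (n_class, d) class-name text embeddings
    """
    proto = support_visual.mean(axis=1)                # visual prototype
    return l2norm(l2norm(proto) + l2norm(text_feats))  # add, re-normalise

def classify(query, protos):
    """Assign each query to its nearest prototype by cosine similarity."""
    return (l2norm(query) @ protos.T).argmax(axis=1)

rng = np.random.default_rng(0)
sv = rng.normal(size=(5, 1, 64))               # 5-way 1-shot support set
txt = rng.normal(size=(5, 64))                 # stand-in text embeddings
q = sv[:, 0] + 0.1 * rng.normal(size=(5, 64))  # queries near each class
print(classify(q, fused_prototypes(sv, txt)))
```

Because fusion is a single vector addition, the only learnable parts in the real framework are the prompts and the backbone, which keeps the method light.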
Citations: 0
Zero-shot sim-to-real transfer using Siamese-Q-based reinforcement learning
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2024-09-06 · DOI: 10.1016/j.inffus.2024.102664
Zhenyu Zhang, Shaorong Xie, Han Zhang, Xiangfeng Luo, Hang Yu

To address real-world decision problems in reinforcement learning, it is common to first train a policy in a simulator for safety. Unfortunately, the sim-to-real gap hinders effective transfer without substantial training data, and collecting real samples of complex tasks is often impractical; the sample inefficiency of reinforcement learning exacerbates the problem even with online interaction or data. Representation learning can improve sample efficiency while preserving generalization by projecting high-dimensional inputs into low-dimensional representations. However, whether trained independently or jointly with reinforcement learning, representation learning has remained a separate auxiliary task, lacking task-related features and the generalization needed for sim-to-real transfer. This paper proposes Siamese-Q, a new representation learning method employing Siamese networks for zero-shot sim-to-real transfer, which narrows the distance between inputs with the same semantics in the latent space with respect to Q values. This allows us to fuse task-related information into the representation and improve the generalization of the policy. Evaluation in virtual and real autonomous vehicle scenarios demonstrates substantial improvements of 19.5% and 94.2%, respectively, over conventional representation learning, without requiring any real-world observations or on-policy interaction, enabling reinforcement learning policies trained in simulation to transfer to reality. (Information Fusion, Volume 114, Article 102664.)
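The core objective, pulling two views of the same state together both in the latent space and in the Q values computed from it, can be sketched as below. This is our own reading of the abstract rendered as a toy loss with linear stand-in networks; the actual Siamese-Q architecture, pairing strategy, and loss weighting are not specified here.

```python
import numpy as np

def encoder(x, W):
    """Shared (Siamese) encoder into the latent space."""
    return np.tanh(x @ W)

def q_head(z, V):
    """Q-value head on top of the latent representation (one value per action)."""
    return z @ V

def siamese_q_loss(x_a, x_b, W, V):
    """Toy Siamese-Q objective for two views of the same state:
    latent-distance term plus Q-value-agreement term."""
    z_a, z_b = encoder(x_a, W), encoder(x_b, W)
    latent_term = np.mean((z_a - z_b) ** 2)
    q_term = np.mean((q_head(z_a, V) - q_head(z_b, V)) ** 2)
    return latent_term + q_term

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(16, 8))        # shared encoder weights
V = rng.normal(scale=0.1, size=(8, 4))         # 4 actions
x = rng.normal(size=(32, 16))                  # simulated observations
x_real = x + 0.01 * rng.normal(size=x.shape)   # "real" views, same semantics
print(siamese_q_loss(x, x_real, W, V))
```

Driving this loss to zero makes the encoder invariant to the sim-vs-real appearance gap for semantically identical states, which is what enables the zero-shot transfer claimed above.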
Citations: 0
Multimodal manifold learning using kernel interpolation along geodesic paths
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2024-09-06 · DOI: 10.1016/j.inffus.2024.102637
Ori Katz, Roy R. Lederman, Ronen Talmon

In this paper, we present a new spectral analysis and a low-dimensional embedding of two aligned multimodal datasets. Our approach combines manifold learning with the Riemannian geometry of symmetric positive-definite (SPD) matrices. Manifold learning typically involves the spectral analysis of a single kernel matrix corresponding to a single dataset or a concatenation of several datasets. Here, we use the Riemannian geometry of SPD matrices to devise an interpolation scheme for combining two kernel matrices corresponding to two, possibly multimodal, datasets. We study how the spectra of the kernels change along geodesic paths on the manifold of SPD matrices, and show that this change enables us, in a purely unsupervised manner, to derive an informative spectral representation of the relations between the two datasets. Based on this representation, we propose a new multimodal manifold learning method. We showcase the performance of the proposed spectral representation and manifold learning method using both simulations and real measured data from multi-sensor industrial condition monitoring and artificial olfaction, demonstrating superior results compared to several baselines in terms of the truncated Dirichlet energy. (Information Fusion, Volume 114, Article 102637.)
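Geodesic interpolation between two SPD kernel matrices has a standard closed form under the affine-invariant metric, K(t) = K1^{1/2} (K1^{-1/2} K2 K1^{-1/2})^t K1^{1/2}, and can be sketched directly in numpy. This is an illustration of that classical formula on toy kernels, not the authors' code; the paper's contribution is the analysis of the spectra along this path, which the sketch omits.

```python
import numpy as np

def spd_power(M, t):
    """Fractional power of a symmetric positive-definite matrix via eigh."""
    w, U = np.linalg.eigh(M)
    return (U * w**t) @ U.T

def geodesic_kernel(K1, K2, t):
    """Point at parameter t on the affine-invariant geodesic between
    SPD kernels K1 (t = 0) and K2 (t = 1):
        K(t) = K1^{1/2} (K1^{-1/2} K2 K1^{-1/2})^t K1^{1/2}
    """
    K1_half, K1_ihalf = spd_power(K1, 0.5), spd_power(K1, -0.5)
    inner = K1_ihalf @ K2 @ K1_ihalf
    return K1_half @ spd_power(inner, t) @ K1_half

# Two toy SPD kernels built from random data of two "modalities",
# regularised to keep them strictly positive definite.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(10, 4)), rng.normal(size=(10, 4))
K1 = X1 @ X1.T + 1e-2 * np.eye(10)
K2 = X2 @ X2.T + 1e-2 * np.eye(10)
K_mid = geodesic_kernel(K1, K2, 0.5)  # geometric mean of the two kernels
print(np.allclose(geodesic_kernel(K1, K2, 0.0), K1))  # True
```

Sweeping `t` from 0 to 1 traces the kernel family whose changing spectrum the paper uses to characterise the relation between the two datasets.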
Citations: 0
New trends of adversarial machine learning for data fusion and intelligent system
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2024-09-06 · DOI: 10.1016/j.inffus.2024.102683
Weiping Ding, Zheng Zhang, Luis Martínez, Yu Huang, Zehong (Jimmy) Cao, Jun Liu, Abhirup Banerjee

(Information Fusion, Volume 114, Article 102683. No abstract available.)
Citations: 0
Detecting Android malware: A multimodal fusion method with fine-grained feature
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2024-09-05 · DOI: 10.1016/j.inffus.2024.102662
Xun Li, Lei Liu, Yuzhou Liu, Huaxiao Liu

Context: Many studies have recently been proposed to address the threat posed by Android malware, but the continuous evolution of malware challenges the feature representations used by current detection methods. Objective: This paper introduces a novel Android malware detection approach based on the source code and binary code of software, leveraging large pre-trained models with a fine-grained multimodal fusion strategy. Method: The approach treats the source code as the programming language modality (PM) and the binary code as the machine language modality (MM). Domain-specific knowledge (sensitive APIs) combined with a large pre-trained model is applied to extract PM features, while the binary code is transformed into RGB images from which MM features are extracted using a pre-trained image-processing model. A fine-grained fusion strategy based on a multi-head self-attention mechanism then captures the correlations among features across modalities and generates comprehensive features for malware detection. Results and Conclusion: The detection performance and generalization ability of the proposed method were validated on two experimental datasets. Our method accurately distinguishes malware, achieving an accuracy of 98.28% and an F1-score of 98.66%, and performs well on unseen data, with an accuracy of 92.86% and an F1-score of 94.49%. Ablation experiments confirm the contributions of sensitive-API knowledge and the fine-grained multimodal fusion strategy to the success of malware detection. (Information Fusion, Volume 114, Article 102662.)
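Multi-head self-attention over the concatenated token sequences of the two modalities is the standard way to let PM and MM features attend to each other. The numpy sketch below is our own illustration with random stand-in projections; token counts, dimensions, and the downstream classifier are assumptions, not details from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multihead_self_attention(X, n_heads, rng):
    """Toy multi-head self-attention over a token sequence X of shape
    (n_tokens, d). Each head projects to d // n_heads dimensions,
    attends with scaled dot products, and the heads are concatenated."""
    n, d = X.shape
    dh = d // n_heads
    out = []
    for _ in range(n_heads):
        Wq, Wk, Wv = (rng.normal(scale=d**-0.5, size=(d, dh)) for _ in range(3))
        Q, K, V = X @ Wq, X @ Wk, X @ Wv
        att = softmax(Q @ K.T / np.sqrt(dh))  # includes cross-modal weights
        out.append(att @ V)
    return np.concatenate(out, axis=1)        # (n_tokens, d)

rng = np.random.default_rng(0)
pm = rng.normal(size=(4, 16))  # programming-language-modality tokens
mm = rng.normal(size=(6, 16))  # machine-language-modality tokens
fused = multihead_self_attention(np.vstack([pm, mm]), n_heads=4, rng=rng)
print(fused.shape)  # (10, 16)
```

Because PM and MM tokens sit in one sequence, every attention row mixes within-modality and cross-modality evidence, which is what makes the fusion "fine-grained" at the feature level.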
Citations: 0
A survey on occupancy perception for autonomous driving: The information fusion perspective
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2024-09-05 · DOI: 10.1016/j.inffus.2024.102671
Huaiyuan Xu, Junliang Chen, Shiyu Meng, Yi Wang, Lap-Pui Chau

3D occupancy perception technology aims to observe and understand dense 3D environments for autonomous vehicles. Owing to its comprehensive perception capability, it is emerging as a trend in autonomous driving perception systems and is attracting significant attention from both industry and academia. Like traditional bird's-eye-view (BEV) perception, 3D occupancy perception has multi-source inputs and requires information fusion; the difference is that it captures vertical structures that 2D BEV ignores. In this survey, we review the most recent work on 3D occupancy perception and provide in-depth analyses of methodologies across various input modalities. Specifically, we summarize general network pipelines, highlight information fusion techniques, and discuss effective network training. We evaluate and analyze the occupancy perception performance of the state of the art on the most popular datasets, and we discuss challenges and future research directions. We hope this paper will inspire the community and encourage more research on 3D occupancy perception. A comprehensive list of the studies in this survey is publicly available in an actively maintained repository that continuously collects the latest work: https://github.com/HuaiyuanXu/3D-Occupancy-Perception. (Information Fusion, Volume 114, Article 102671.)
Citations: 0
A contemporary survey on multisource information fusion for smart sustainable cities: Emerging trends and persistent challenges
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2024-09-04 · DOI: 10.1016/j.inffus.2024.102667
Houda Orchi, Abdoulaye Baniré Diallo, Halima Elbiaze, Essaid Sabir, Mohamed Sadik

The emergence of smart sustainable cities has unveiled a wealth of data sources, each contributing to a vast array of urban applications. At the heart of managing this plethora of data is multisource information fusion (MSIF), a sophisticated approach that improves the quality of data collected from myriad sources, including sensors, satellites, social media, and citizen-generated content, and helps generate actionable insights crucial for sustainable urban management. Unlike simple data fusion, MSIF excels at harmonizing disparate data sources, effectively navigating their variability, potential conflicts, and the challenges posed by incomplete datasets. This capability is essential for ensuring the integrity and utility of information, which supports comprehensive insights into urban systems and effective planning. This survey combines hierarchical and multi-dimensional classification to examine how MSIF integrates and analyses diverse datasets, enhancing the operational efficiency and intelligence of urban environments. It highlights the most significant challenges and opportunities presented by MSIF in smart sustainable cities, in particular how it overcomes the limitations of existing approaches in scope and coverage.

By considering social, economic, and environmental factors, MSIF offers a multidisciplinary approach that is pivotal for advancing sustainable urban development. Intended as a resource for academics and practitioners, this study promotes a new wave of MSIF innovations aimed at improving the cohesion, efficiency, and sustainability of smart cities. (Information Fusion, Volume 114, Article 102667.)
Citations: 0
Scene understanding method utilizing global visual and spatial interaction features for safety production
IF 14.7 · CAS Q1 · Computer Science
Information Fusion · Pub Date: 2024-09-04 · DOI: 10.1016/j.inffus.2024.102668
Fuqi Ma, Bo Wang, Xuzhu Dong, Min Li, Hengrui Ma, Rong Jia, Amar Jain

Risk identification in power operations is crucial for both personal safety and power production. Existing methods mainly use object detection models to identify common risks, such as a missing safety harness or missing insulated gloves, but they ignore the scene specificity of risk occurrence: detecting a given piece of safety gear is meaningful only in particular scenes, and electric power work is a complex setting involving many interacting elements such as personnel, equipment, and safety tools. This paper therefore proposes a scene understanding method that integrates visual features with the spatial relationship features among scene elements. The method constructs an undirected scene graph to represent the interactions among elements, extracts interaction features with a graph encoder-decoder convolution module, and fuses the high-dimensional visual features with the spatial topological features for scene recognition, effectively addressing the power operation scene understanding problem under multi-element interaction. A power inspection operation scenario was chosen as the test case; the evaluation results indicate that the proposed approach achieves superior precision in scene identification and demonstrates strong generalization ability. (Information Fusion, Volume 114, Article 102668.)
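The fusion of a pooled scene-graph feature with a global visual feature can be sketched as follows. This is a hypothetical illustration of the general pattern (one graph-propagation step, mean pooling, concatenation, linear scoring) with made-up dimensions and random stand-in weights; the paper's actual encoder-decoder graph module is more elaborate.

```python
import numpy as np

def scene_score(A, E, v, rng):
    """Propagate element features E over the scene graph A (one
    random-walk-normalised graph-convolution step), mean-pool into a
    spatial-interaction vector, concatenate with the global visual
    feature v, and score scene classes with a stand-in linear head."""
    A_hat = A + np.eye(A.shape[0])                   # add self-loops
    H = (A_hat / A_hat.sum(axis=1, keepdims=True)) @ E
    spatial = H.mean(axis=0)                         # pooled interaction feature
    fused = np.concatenate([v, spatial])             # visual + spatial fusion
    W = rng.normal(scale=0.1, size=(fused.size, 3))  # 3 hypothetical scene classes
    return fused @ W

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 0], [1, 0, 0]], float)  # worker-equipment-tool graph
E = rng.normal(size=(3, 8))                             # element (node) features
v = rng.normal(size=16)                                 # global visual feature
print(scene_score(A, E, v, rng).shape)  # (3,)
```

The key design choice the abstract argues for is visible even in this toy form: the class score depends on who interacts with what (the graph term), not only on which objects appear (the visual term).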
Citations: 0