Information Fusion — Latest Articles

Triple disentangled representation learning for multimodal affective analysis
IF 14.7 | CAS Tier 1 | Computer Science
Information Fusion | Pub Date: 2024-09-03 | DOI: 10.1016/j.inffus.2024.102663
Abstract: In multimodal affective analysis (MAA) tasks, the heterogeneity among different modalities has propelled the exploration of disentanglement methods as a pivotal research area. Many emerging studies focus on disentangling modality-invariant and modality-specific representations from the input data and then fusing them for prediction. However, our study shows that modality-specific representations may contain information that is irrelevant or conflicting with the task, which degrades the effectiveness of the learned multimodal representations. We revisit the disentanglement issue and propose a novel triple disentanglement approach, TriDiRA, which disentangles the modality-invariant, effective modality-specific, and ineffective modality-specific representations from the input data. By fusing only the modality-invariant and effective modality-specific representations, TriDiRA can significantly alleviate the impact of irrelevant and conflicting information across modalities during model training and prediction. Extensive experiments conducted on four benchmark datasets demonstrate the effectiveness and generalization of our triple disentanglement, which outperforms SOTA methods. The code is available at https://anonymous.4open.science/r/TriDiRA.
Citations: 0
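The fuse-only-what-helps idea in this abstract can be illustrated with a minimal PyTorch sketch. The encoder split, module names, and dimensions below are all illustrative assumptions, not the authors' TriDiRA implementation:

```python
# Minimal sketch of a triple disentangle-and-fuse forward pass (illustrative
# assumptions throughout; this is not the authors' TriDiRA code).
import torch
import torch.nn as nn

class TripleDisentangler(nn.Module):
    def __init__(self, in_dim=128, z_dim=32):
        super().__init__()
        # One encoder per factor: modality-invariant, effective-specific,
        # ineffective-specific (hypothetical split mirroring the abstract).
        self.enc_inv = nn.Linear(in_dim, z_dim)
        self.enc_eff = nn.Linear(in_dim, z_dim)
        self.enc_ineff = nn.Linear(in_dim, z_dim)

    def forward(self, x):
        return self.enc_inv(x), self.enc_eff(x), self.enc_ineff(x)

text_feat = torch.randn(4, 128)   # toy per-modality features
audio_feat = torch.randn(4, 128)
model_t, model_a = TripleDisentangler(), TripleDisentangler()

inv_t, eff_t, _ = model_t(text_feat)    # ineffective part is discarded
inv_a, eff_a, _ = model_a(audio_feat)

# Fuse only the invariant + effective representations for prediction.
fused = torch.cat([inv_t, eff_t, inv_a, eff_a], dim=-1)
head = nn.Linear(fused.shape[-1], 1)    # e.g., a sentiment regressor
print(head(fused).shape)                # torch.Size([4, 1])
```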
Frontiers and developments of data augmentation for image: From unlearnable to learnable
IF 14.7 | CAS Tier 1 | Computer Science
Information Fusion | Pub Date: 2024-09-03 | DOI: 10.1016/j.inffus.2024.102660
Abstract: Data augmentation is a crucial technique for expanding training datasets, effectively alleviating the overfitting that arises from limited training data in deep learning models. This paper takes a fresh perspective and offers a scholarly exploration of image data augmentation, following a logical progression from unlearnable to learnable methods. The paper begins with a brief overview of the developmental history of data augmentation and categorizes techniques as unlearnable or learnable based on their "variation" strategies. It then outlines the fundamental properties of data augmentation, including expansiveness, fidelity, generalizability, and self-adaptability. Subsequently, focusing on unlearnable and learnable techniques, the paper further divides them into single-sample and multi-sample, global and local, and image-domain and feature-domain methods, categorically reviewing the basic principles and effects of various data augmentation methods according to differences in the sources, scopes, and content of their "variation" attributes. Finally, a comparative analysis of diverse data augmentation methodologies on specific tasks is conducted, alongside a synthesis and projection of future research directions. By comprehensively analyzing diverse image data augmentation methods from a fresh perspective, this review reveals the intrinsic disparities between unlearnable and learnable techniques and paves the way for innovative research in data augmentation.
Citations: 0
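As one concrete instance of the "multi-sample" category this survey discusses, here is mixup (Zhang et al., 2018), a standard unlearnable multi-sample augmentation; the hyperparameter value is illustrative:

```python
# Mixup: convexly combine two samples and their labels (multi-sample,
# unlearnable augmentation); alpha=0.2 is a common but arbitrary choice.
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Return a convex combination of two (sample, label) pairs."""
    lam = np.random.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

img_a, img_b = np.random.rand(32, 32, 3), np.random.rand(32, 32, 3)
lab_a, lab_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # one-hot labels
x_mix, y_mix = mixup(img_a, lab_a, img_b, lab_b)
print(x_mix.shape, y_mix)  # (32, 32, 3) and a soft label like [0.83, 0.17]
```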
Divergence-guided disentanglement of view-common and view-unique representations for multi-view data
IF 14.7 | CAS Tier 1 | Computer Science
Information Fusion | Pub Date: 2024-09-02 | DOI: 10.1016/j.inffus.2024.102661
Abstract: In the field of multi-view learning (MVL), it is crucial to extract both common (consistent) and unique (complementary) information across different views. While the focus has traditionally been on acquiring common information, there has been a recent shift towards exploring unique information as well. However, developing an MVL model that can simultaneously capture both common and unique information, thereby facilitating a comprehensive understanding of multi-view data, remains a significant challenge. To address this, we propose the Divergence-guided Multi-view Learning framework (DG-MVL), inspired by information-theoretic learning theory, specifically the generalized divergence measure. This framework employs multi-view autoencoders to disentangle the features obtained from each view into coarse common and unique components. By simultaneously minimizing the divergence between the coarse common features learned by each view's common encoder and maximizing the divergence between the coarse unique features from the unique encoders, we optimize the extraction of both common and unique information. Subsequently, we merge these features to generate a comprehensive and concise representation of the multi-view data, which can be readily utilized for various downstream tasks. We validate our framework on a synthetic multi-view dataset, demonstrating its effectiveness in disentangling common and unique information. Further experiments on various real-world datasets confirm the effectiveness of DG-MVL in capturing common and unique information from multi-view data, resulting in superior classification performance compared to existing methods. Code is available at https://github.com/LMFLRB/DG-MVL.git.
Citations: 0
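The minimize/maximize objective described above can be sketched with an empirical Cauchy-Schwarz divergence, a standard generalized divergence from information-theoretic learning. Whether DG-MVL uses this exact measure is an assumption here, and the feature tensors are random stand-ins:

```python
# Align common features across views, push apart unique ones; the sign of
# each term mirrors the minimize/maximize objective in the abstract.
import torch

def gaussian_gram(a, b, sigma=1.0):
    """Mean Gaussian-kernel affinity between two sample sets."""
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2 * sigma**2)).mean()

def cs_divergence(x, y, sigma=1.0):
    """Empirical Cauchy-Schwarz divergence (a stand-in for DG-MVL's measure)."""
    vxy = gaussian_gram(x, y, sigma)
    vxx = gaussian_gram(x, x, sigma)
    vyy = gaussian_gram(y, y, sigma)
    return -torch.log(vxy**2 / (vxx * vyy))

common_v1, common_v2 = torch.randn(64, 16), torch.randn(64, 16)
unique_v1, unique_v2 = torch.randn(64, 16), torch.randn(64, 16)

loss = cs_divergence(common_v1, common_v2) - cs_divergence(unique_v1, unique_v2)
print(float(loss))
```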
Generative AIBIM: An automatic and intelligent structural design pipeline integrating BIM and generative AI
IF 14.7 | CAS Tier 1 | Computer Science
Information Fusion | Pub Date: 2024-09-01 | DOI: 10.1016/j.inffus.2024.102654
Abstract: AI-based intelligent structural design represents a transformative approach that addresses the inefficiencies inherent in traditional structural design practices. This paper improves existing AI-based design frameworks in four aspects and proposes Generative AIBIM: an automatic and intelligent structural design pipeline that integrates Building Information Modeling (BIM) and generative AI. First, the proposed pipeline not only broadens the application scope of BIM, aligning with BIM's growing relevance in civil engineering, but also marks a significant advance over previous methods that relied solely on CAD drawings. Second, in Generative AIBIM, a two-stage generation framework incorporating generative AI (TGAI), inspired by the human drawing process, is designed to reduce the complexity of the structural design problem. Third, for the generative AI model in TGAI, this paper pioneers the fusion of physical conditions into diffusion models (DMs) to build a novel physics-based conditional diffusion model (PCDM). In contrast to conventional DMs, PCDM directly predicts shear wall drawings to focus on similarity, and it effectively fuses cross-domain information, i.e., design drawings (image data), timesteps, and physical conditions, by integrating well-designed attention modules. Additionally, a new evaluation system including objective and subjective measures (i.e., Score_IoU and FID) is designed to comprehensively evaluate model performance, complementing the evaluation system of traditional methods, which adopt only the objective metric. The quantitative results demonstrate that PCDM significantly surpasses recent state-of-the-art (SOTA) techniques (StructGAN and its variants) on both measures: the Score_IoU of PCDM is 30% higher, and the FID of PCDM is less than 1/3 of that of the best competitor. The qualitative results highlight PCDM's superior capability in generating high-perceptual-quality design drawings that adhere to essential design criteria. In addition, benefiting from the fusion of physical conditions, PCDM effectively supports diverse and creative designs tailored to building heights and seismic precautionary intensities, showcasing its unique and powerful generation and generalization capabilities. Associated ablation studies further demonstrate the effectiveness of our method.
Citations: 0
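The evaluation pairs Score_IoU with FID. The paper's exact Score_IoU definition is not reproduced here; the sketch below shows the plain binary IoU between a generated and a ground-truth shear-wall mask, the kind of overlap measure such a score builds on (all data is synthetic):

```python
# Illustrative IoU between two binary masks (e.g., generated vs. ground-truth
# shear-wall layouts); not the paper's exact Score_IoU definition.
import numpy as np

def binary_iou(pred, target):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return inter / union if union > 0 else 1.0

pred = np.zeros((64, 64), bool); pred[10:40, 10:20] = True
gt = np.zeros((64, 64), bool);   gt[12:40, 10:22] = True
print(f"IoU = {binary_iou(pred, gt):.3f}")
```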
Integrating imprecise data in generative models using interval-valued Variational Autoencoders
IF 14.7 | CAS Tier 1 | Computer Science
Information Fusion | Pub Date: 2024-08-31 | DOI: 10.1016/j.inffus.2024.102659
Abstract: Variational Autoencoders (VAEs) enable the integration of diverse data sources into a unified latent representation, facilitating the fusion of information from various inputs and the creation of disentangled representations that separate different factors of variation in the data. Traditional VAEs, however, are limited by the assumption of a single prior distribution over the latent variables, which restricts their ability to handle the epistemic uncertainty arising from imprecise measurements and incomplete data. This paper introduces the Interval-Valued Variational Autoencoder (iVAE), which employs a family of prior distributions and incorporates specialized neurons and redefined objective functions for handling interval-valued data. This architecture maintains computational efficiency while extending the model's applicability to scenarios with pronounced epistemic uncertainty. The iVAE's efficacy is demonstrated on two types of data: intrinsically interval-valued data and noisy data preprocessed into interval formats. The first category is exemplified by a graphical analysis of questionnaires, while the second involves case studies on estimating the remaining useful life of aviation engines, where the iVAE outperforms traditional methods, providing more accurate diagnostics and robust predictions.
Citations: 0
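One standard way to propagate an interval-valued input through an affine layer is interval arithmetic, as used in interval bound propagation. The sketch below illustrates that mechanism under those assumptions, not the iVAE's actual specialized neurons:

```python
# Propagating an interval [lo, hi] through a linear layer via standard
# interval arithmetic: split the weights into positive and negative parts.
import torch
import torch.nn as nn

def interval_linear(layer: nn.Linear, lo, hi):
    """Return sound elementwise output bounds for interval-valued inputs."""
    w_pos, w_neg = layer.weight.clamp(min=0), layer.weight.clamp(max=0)
    out_lo = lo @ w_pos.T + hi @ w_neg.T + layer.bias
    out_hi = hi @ w_pos.T + lo @ w_neg.T + layer.bias
    return out_lo, out_hi

layer = nn.Linear(8, 4)
x = torch.randn(2, 8)
lo, hi = x - 0.1, x + 0.1            # imprecise measurements as intervals
out_lo, out_hi = interval_linear(layer, lo, hi)
assert torch.all(out_lo <= out_hi)   # the output is again a valid interval
```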
Model compression techniques in biometrics applications: A survey
IF 14.7 | CAS Tier 1 | Computer Science
Information Fusion | Pub Date: 2024-08-31 | DOI: 10.1016/j.inffus.2024.102657
Abstract: The development of deep learning algorithms has extensively empowered humanity's capacity for task automation. However, the huge improvement in the performance of these models is highly correlated with their increasing complexity, limiting their usefulness in human-oriented applications, which are usually deployed on resource-constrained devices. This has led to the development of compression techniques that drastically reduce the computational and memory costs of deep learning models without significant performance degradation. Compressed models are especially essential when implementing multi-model fusion solutions, where multiple models must operate simultaneously. This paper systematizes the current literature on this topic by presenting a comprehensive survey of model compression techniques in biometrics applications, namely quantization, knowledge distillation, and pruning. We conduct a critical analysis of the comparative value of these techniques, focusing on their advantages and disadvantages, and present suggestions for future work that could improve current methods. Additionally, we discuss and analyze the link between model bias and model compression, highlighting the need to direct compression research toward model fairness in future work.
Citations: 0
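Of the three technique families surveyed, pruning is the easiest to show concretely. The snippet below uses PyTorch's built-in pruning utilities; the 50% sparsity level is an arbitrary choice for illustration:

```python
# L1 magnitude pruning with torch.nn.utils.prune: zero out the smallest
# 50% of weights by absolute value, then bake the mask in permanently.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)
prune.l1_unstructured(layer, name="weight", amount=0.5)
sparsity = float((layer.weight == 0).float().mean())
print(f"weight sparsity: {sparsity:.2f}")   # ~0.50
prune.remove(layer, "weight")               # make the pruning permanent
```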
SSTtrack: A unified hyperspectral video tracking framework via modeling spectral-spatial-temporal conditions
IF 14.7 | CAS Tier 1 | Computer Science
Information Fusion | Pub Date: 2024-08-29 | DOI: 10.1016/j.inffus.2024.102658
Abstract: Hyperspectral video contains rich spectral, spatial, and temporal conditions that are crucial for capturing complex object variations and overcoming the inherent limitations (e.g., multi-device imaging, modality alignment, and finite spectral bands) of regular RGB and multi-modal video tracking. However, existing hyperspectral tracking methods frequently encounter issues including data anxiety, the band gap, huge data volume, and weak use of the temporal conditions embedded in video sequences, resulting in unsatisfactory tracking capabilities. To tackle these dilemmas, we present a unified hyperspectral video tracking framework that models spectral-spatial-temporal conditions end-to-end, dubbed SSTtrack. First, we design a multi-modal generation adapter (MGA) to explore the interpretability benefits of combining physical and machine models for learning multi-modal generation and bridging the band gap. To dynamically transfer and interact across multiple modalities, we then construct a novel spectral-spatial adapter (SSA). Finally, we design a temporal condition adapter (TCA) that injects the temporal condition to guide spectral and spatial feature representations in capturing static and instantaneous object properties. SSTtrack follows the prompt-learning paradigm, adding only a few trainable parameters (0.575 M), and achieves superior performance in extensive comparisons. The code will be released at https://github.com/YZCU/SSTtrack.
Citations: 0
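SSTtrack's MGA, SSA, and TCA modules are not reproduced here; the snippet below only sketches the general parameter-efficient adapter pattern the abstract relies on, namely a frozen backbone plus a small trainable bottleneck adapter (all module shapes are assumptions):

```python
# Generic residual bottleneck adapter on a frozen backbone: only the
# adapter's few parameters are trained, mirroring the prompt-learning
# paradigm described in the abstract (not SSTtrack's actual modules).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim=256, bottleneck=16):
        super().__init__()
        self.down, self.up = nn.Linear(dim, bottleneck), nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual adapter

backbone = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 256))
for p in backbone.parameters():
    p.requires_grad = False          # backbone stays frozen

adapter = Adapter()
trainable = sum(p.numel() for p in adapter.parameters())
print(f"trainable parameters: {trainable}")  # only the adapter is updated
```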
DDBFusion: A unified image decomposition and fusion framework based on dual decomposition and Bézier curves
IF 14.7 | CAS Tier 1 | Computer Science
Information Fusion | Pub Date: 2024-08-28 | DOI: 10.1016/j.inffus.2024.102655
Abstract: Existing image fusion algorithms mostly concentrate on the design of network architectures and loss functions, using unified feature extraction strategies while neglecting the division between redundant and effective information. For complementary information, however, a unified feature extractor may not be appropriate. This paper therefore presents a unified image fusion algorithm based on Bézier-curve image augmentation and hierarchical decomposition, in which a self-supervised learning task is constructed to learn the meaningful information. The Bézier curves simulate different image features and construct special self-supervised learning samples, so our method does not require task-specific data and can easily be trained on public natural-image datasets. Meanwhile, our dual-decomposition self-supervised training method equips the model with the ability to filter redundant information. During the decomposition stage, we classify and extract different image features and use only the extracted effective information in the fusion stage; this decomposition ability also provides a foundation for advanced visual tasks such as image segmentation and object detection. Finally, more detailed and comprehensive fusion images are generated, and redundant information is effectively reduced. The validity of the proposed method is verified through qualitative and quantitative analysis of multiple image fusion tasks, and our algorithm achieves state-of-the-art results on multiple datasets across different image fusion tasks. The code of our fusion method is available at https://github.com/Yukarizz/DDBFusion.
Citations: 0
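How the Bézier curves "simulate different image features" is specific to the paper; one plausible reading, shown below purely as an assumption, is using a monotone cubic Bézier curve as a nonlinear intensity remapping to synthesize self-supervised training pairs. The control points are illustrative:

```python
# A cubic Bezier curve used as a tone curve: each intensity value is treated
# as the curve parameter t, giving a smooth nonlinear remapping in [0, 1].
import numpy as np

def cubic_bezier(t, p0, p1, p2, p3):
    """Closed-form cubic Bezier evaluation."""
    u = 1 - t
    return u**3 * p0 + 3 * u**2 * t * p1 + 3 * u * t**2 * p2 + t**3 * p3

img = np.random.rand(64, 64)                      # intensities in [0, 1]
remapped = cubic_bezier(img, 0.0, 0.3, 0.7, 1.0)  # hypothetical control points
print(remapped.min() >= 0.0, remapped.max() <= 1.0)  # stays in [0, 1]
```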
A systematic literature review of low-cost 3D mapping solutions
IF 14.7 | CAS Tier 1 | Computer Science
Information Fusion | Pub Date: 2024-08-28 | DOI: 10.1016/j.inffus.2024.102656
Abstract: In "low-cost" solutions, ensuring economic accessibility and democratizing the availability of emerging technologies are pivotal considerations. This study undertakes a systematic literature review of low-cost 3D mapping solutions. Using SCOPUS as the primary database, a comprehensive bibliometric analysis encompassing 1380 publications was conducted, subsequently narrowing the focus to 87 recent publications for detailed review. This research aims to delineate the defining characteristics of low-cost systems, elucidate their principal applications and preferred platforms, assess their level of accessibility, gauge the extent of innovation in both hardware and software development, explore the contributions of deep learning and data fusion, evaluate how data quality is considered, and examine the contemporary relevance of photogrammetry within a low-cost context. The findings show that many authors use the term "low-cost" subjectively to highlight qualities of a technology, methodology, or sensor, but challenges arise when comparing data quality with high-cost systems.
Citations: 0
Temporal-spatial-fusion-based risk assessment on the adjacent building during deep excavation
IF 14.7 | CAS Tier 1 | Computer Science
Information Fusion | Pub Date: 2024-08-27 | DOI: 10.1016/j.inffus.2024.102653
Abstract: Foundation pit excavation inevitably causes uneven ground settlement, posing potential risks to adjacent structures and infrastructure. To better perceive the risk status of adjacent buildings using multi-source information, a temporal-spatial-fusion-based risk assessment (TSFRA) model that accounts for uncertainty and causality is developed by integrating FAHP (Fuzzy Analytic Hierarchy Process), DGDT (DAG-GNN, the DEMATEL method, and topological analysis), and the CM (cloud model). More specifically, FAHP derives temporal weights for excavation conditions based on expert knowledge. DGDT determines spatial weights for monitoring points, with DAG-GNN constructing causal graphs in a data-driven manner, DEMATEL handling global interactions, and topological analysis calculating node importance. Finally, the CM fuses the temporal (excavation condition) and spatial (monitoring point) weights with settlement monitoring data, yielding qualitative assessments of the overall risk status of adjacent buildings in uncertain environments. The proposed TSFRA is verified on a Shanghai metro station construction project. Results indicate that the perceived risk of the project is at a relatively low level, consistent with actual conditions, and high-risk excavation conditions and locations can be easily identified. There is a trend toward higher risk during the excavation of soft soil layers, so risk control measures can be formulated to avoid potential risk events. In short, the weight determination and fusion method in TSFRA helps handle uncertainties in expert judgments and causalities in collected data, providing more reliable decision-making in excavation-induced risk assessment and control to ensure the safety of adjacent buildings.
Citations: 0
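As a toy illustration of fusing temporal and spatial weights with settlement readings, the sketch below reduces the cloud-model step to a simple weighted average. The FAHP/DGDT weights and monitoring data are invented, and the real CM produces qualitative cloud descriptors rather than a single score:

```python
# Toy temporal-spatial fusion: stage weights x settlement matrix x point
# weights collapses to one aggregate score (a stand-in for the CM step).
import numpy as np

w_temporal = np.array([0.5, 0.3, 0.2])    # hypothetical FAHP stage weights
w_spatial = np.array([0.4, 0.35, 0.25])   # hypothetical DGDT point weights
settlement = np.array([[2.1, 3.4, 1.8],   # mm; rows: stages, cols: points
                       [2.8, 4.1, 2.2],
                       [3.5, 5.0, 2.9]])

risk_score = w_temporal @ settlement @ w_spatial
print(f"fused risk score: {risk_score:.2f} mm")
```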