Latest Articles in Neural Networks

Supporting vision-language model few-shot inference with confounder-pruned knowledge prompt
Neural Networks (IF 6.0, CAS Q1, Computer Science) · Pub Date: 2025-01-18 · DOI: 10.1016/j.neunet.2025.107173
Jiangmeng Li, Wenyi Mo, Fei Song, Chuxiong Sun, Wenwen Qiang, Bing Su, Changwen Zheng
Abstract: Vision-language models are pre-trained by aligning image-text pairs in a common space to handle open-set visual concepts. Recent works adopt fixed or learnable prompts, i.e., classification weights synthesized from natural language descriptions of task-relevant categories, to reduce the gap between the pre-training and inference phases. However, how and what prompts can improve inference performance remains unclear. In this paper, we explicitly clarify the importance of incorporating semantic information into prompts, whereas existing prompting methods generate prompts without sufficiently exploring the semantic information of textual labels. Manually constructing prompts with rich semantics requires domain expertise and is extremely time-consuming. To cope with this issue, we propose a knowledge-aware prompt learning method, namely Confounder-pruned Knowledge Prompt (CPKP), which retrieves an ontology knowledge graph by treating the textual label as a query to extract task-relevant semantic information. CPKP further introduces a double-tier confounder-pruning procedure to refine the derived semantic information. Adhering to the individual causal effect principle, the graph-tier confounders are gradually identified and phased out. The feature-tier confounders are eliminated by following the maximum entropy principle in information theory. Empirically, the evaluations demonstrate the effectiveness of CPKP in few-shot inference: with only two shots, CPKP outperforms the manual-prompt method by 4.64% and the learnable-prompt method by 1.09% on average.
Citations: 0
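The core idea of treating a textual label as a query against an ontology can be sketched in a few lines. The toy ontology, template string, and helper name below are illustrative assumptions, not the authors' data or implementation; CPKP additionally prunes confounders from the retrieved knowledge, which this sketch omits.

```python
# Hypothetical sketch: use the class label as a query into a small ontology
# to collect task-relevant facts, then fold them into a knowledge-rich prompt.
# TOY_ONTOLOGY and the prompt template are invented for illustration.
TOY_ONTOLOGY = {
    "sparrow": ["bird", "small beak", "brown feathers"],
    "airliner": ["aircraft", "jet engines", "fixed wings"],
}

def build_knowledge_prompt(label, ontology, max_facts=3):
    """Return a natural-language prompt enriched with ontology facts for `label`."""
    facts = ontology.get(label, [])[:max_facts]
    if not facts:
        return f"a photo of a {label}."          # fall back to a plain prompt
    return f"a photo of a {label}, which has {', '.join(facts)}."

prompt = build_knowledge_prompt("sparrow", TOY_ONTOLOGY)
```

A real system would query a large ontology graph (and prune graph-tier confounders) rather than a hand-written dictionary.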
INN/ENNS/JNNS - Membership Applic. Form
Neural Networks (IF 6.0, CAS Q1, Computer Science) · Pub Date: 2025-01-18 · DOI: 10.1016/S0893-6080(25)00050-4
Citations: 0
CURRENT EVENTS
Neural Networks (IF 6.0, CAS Q1, Computer Science) · Pub Date: 2025-01-18 · DOI: 10.1016/S0893-6080(25)00049-8
Citations: 0
LGS-KT: Integrating logical and grammatical skills for effective programming knowledge tracing
Neural Networks (IF 6.0, CAS Q1, Computer Science) · Pub Date: 2025-01-18 · DOI: 10.1016/j.neunet.2025.107164
Xinjie Sun, Qi Liu, Kai Zhang, Shuanghong Shen, Yan Zhuang, Yuxiang Guo
Abstract: Knowledge tracing (KT) estimates students' mastery of knowledge concepts or skills by analyzing their historical interactions. Although general KT methods effectively assess students' knowledge states, specific measurement of students' programming skills remains insufficient. Existing studies mainly rely on exercise outcomes and do not fully utilize behavioral data generated during the programming process. Therefore, we propose a Logical and Grammar Skills Knowledge Tracing (LGS-KT) model to enhance programming education. The model integrates static analysis and dynamic monitoring (such as CPU and memory consumption) to evaluate code elements, providing a thorough assessment of code quality. By analyzing students' multiple iterations on the same programming problem, we construct a reweighted logical skill evolution graph to assess the development of students' logical skills. Additionally, to enhance the interactions among representations with similar grammatical skills, we develop a grammatical skills interaction graph based on the similarity of knowledge concepts. This approach significantly improves the accuracy of inferring students' programming grammatical skill states. The LGS-KT model demonstrates superior performance in predicting student outcomes. Our research highlights the potential of a KT model that integrates logical and grammatical skills in programming exercises. To support reproducible research, we have published the data and code at https://github.com/xinjiesun-ustc/LGS-KT, encouraging further innovation in this field.
Citations: 0
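A skill-interaction graph built from knowledge-concept similarity, as the abstract describes, typically thresholds pairwise cosine similarity between concept embeddings. The embeddings, threshold value, and function name below are illustrative assumptions, not taken from the LGS-KT implementation.

```python
import numpy as np

# Hypothetical sketch: connect two skills with an edge when their knowledge-concept
# embeddings are sufficiently similar (cosine similarity above a chosen threshold).
def build_interaction_graph(embeddings, threshold=0.8):
    """Return a boolean adjacency matrix linking skills with similar concepts."""
    unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = unit @ unit.T                  # pairwise cosine similarity
    adj = sim >= threshold
    np.fill_diagonal(adj, False)         # drop self-loops
    return adj

# Toy embeddings: skills 0 and 1 are close, skill 2 is unrelated.
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
adj = build_interaction_graph(emb)
```

In a graph neural network such as the HAGNN described above, this adjacency would define which skill representations exchange messages.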
DCTCNet: Sequency discrete cosine transform convolution network for visual recognition
Neural Networks (IF 6.0, CAS Q1, Computer Science) · Pub Date: 2025-01-18 · DOI: 10.1016/j.neunet.2025.107143
Jiayong Bao, Jiangshe Zhang, Chunxia Zhang, Lili Bao
Abstract: The discrete cosine transform (DCT) has been widely used in computer vision tasks due to its high compression ratio and high-quality visual presentation. However, conventional DCT is affected by the size of the transform region and suffers from blocking artifacts, so eliminating these artifacts while efficiently serving vision tasks is significant and challenging. In this paper, we introduce the All Phase Sequency DCT (APSeDCT) into convolutional networks to extract multi-frequency information from deep features. Because APSeDCT is equivalent to a convolution operation, we construct a corresponding convolution module, called APSeDCT Convolution (APSeDCTConv), with transferability similar to that of vanilla convolution. We then propose an augmented convolutional operator called MultiConv built on APSeDCTConv. By replacing the last three bottleneck blocks of ResNet with MultiConv, our approach not only reduces computational cost and parameter count but also performs strongly on classification, object detection, and instance segmentation. Extensive experiments show that APSeDCTConv augmentation yields consistent image classification improvements on ImageNet across various models and scales, including ResNet, Res2Net, and ResNeXt, and achieves 0.5%–1.1% and 0.4%–0.7% AP improvements for object detection and instance segmentation, respectively, on the COCO benchmark compared to the baseline.
Citations: 0
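The claim that a DCT "is equivalent to a convolution operation" rests on the fact that the DCT basis can be materialized as a fixed filter bank and applied as a strided convolution. The minimal sketch below shows the 1-D orthonormal DCT-II basis for one window; the kernel size (4) is an illustrative choice, and this is not the APSeDCT variant itself.

```python
import numpy as np

# Sketch: build the orthonormal 1-D DCT-II basis as a filter bank. Applying it to
# a window of the signal is an ordinary matrix product, i.e., one convolution step.
def dct_basis(n):
    """Orthonormal DCT-II basis matrix of size n x n (rows are filters)."""
    j = np.arange(n)
    basis = np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    basis[0] *= np.sqrt(1.0 / n)     # DC row scaling
    basis[1:] *= np.sqrt(2.0 / n)    # AC row scaling
    return basis

B = dct_basis(4)
x = np.array([1.0, 2.0, 3.0, 4.0])   # one window of a signal
coeffs = B @ x                       # multi-frequency responses of the window
recon = B.T @ coeffs                 # orthonormality makes the transform invertible
```

In a network, each basis row would be one fixed convolution kernel, so the multi-frequency responses come out as separate channels.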
DGMSCL: A dynamic graph mixed supervised contrastive learning approach for class imbalanced multivariate time series classification
Neural Networks (IF 6.0, CAS Q1, Computer Science) · Pub Date: 2025-01-17 · DOI: 10.1016/j.neunet.2025.107131
Lipeng Qian, Qiong Zuo, Dahu Li, Hong Zhu
Abstract: In the Imbalanced Multivariate Time Series Classification (ImMTSC) task, minority-class instances typically correspond to critical events, such as system faults in power grids or abnormal health events in medical monitoring. Despite being rare and random, these events are highly significant. The dynamic spatial-temporal relationships between minority-class instances and other instances make them more prone to interference from neighboring instances during classification. Increasing the number of minority-class samples during training often results in overfitting to a single pattern of the minority class. Contrastive learning ensures that majority-class instances learn similar features in the representation space, but it does not effectively aggregate features from neighboring minority-class instances, hindering its ability to properly represent these instances in an ImMTS dataset.

Therefore, we propose a dynamic graph-based mixed supervised contrastive learning method (DGMSCL) that effectively fits minority-class features without increasing their number, while also separating them from other instances in the representation space. First, it reconstructs the input sequence into dynamic graphs and employs a hierarchical attention graph neural network (HAGNN) to generate discriminative embedding representations of instances. On this basis, we introduce a novel mixed contrastive loss comprising weight-augmented inter-graph supervised contrast (WAIGC) and context-based minority-class-aware contrast (MCAC). This loss adjusts sample weights based on their quantity and intrinsic characteristics, placing greater emphasis on the minority-class loss to produce more effective gradient signals during training. Additionally, it separates minority-class instances from adjacent transitional instances in the representation space, enhancing their representational capacity.

Extensive experiments across various scenarios and datasets with differing degrees of imbalance demonstrate that DGMSCL consistently outperforms existing baseline models. Specifically, DGMSCL achieves higher overall classification accuracy, as evidenced by significantly improved average F1-score, G-mean, and kappa coefficient across multiple datasets. Moreover, classification results on real-world power data show that DGMSCL generalizes well to real-world applications.
Citations: 0
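The reweighting idea, giving minority-class samples larger loss weights based on their quantity, is commonly implemented as inverse class frequency. The sketch below shows only that count-based term; WAIGC also weights by intrinsic sample characteristics, which is not reproduced here, and the function name is an assumption.

```python
import numpy as np

# Hypothetical sketch of quantity-based reweighting: each sample gets weight
# w_c = N / (K * n_c), so rarer classes contribute larger per-sample losses.
def class_weights(labels):
    """Per-sample weights inversely proportional to class frequency."""
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    w = len(labels) / (len(classes) * counts.astype(float))
    lookup = dict(zip(classes, w))
    return np.array([lookup[y] for y in labels])

w = class_weights([0, 0, 0, 0, 1])   # class 1 is the minority
```

These weights would multiply the per-anchor terms of a supervised contrastive loss, amplifying gradients from minority-class anchors.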
When low-light meets flares: Towards Synchronous Flare Removal and Brightness Enhancement
Neural Networks (IF 6.0, CAS Q1, Computer Science) · Pub Date: 2025-01-17 · DOI: 10.1016/j.neunet.2025.107149
Jiahuan Ren, Zhao Zhang, Suiyi Zhao, Jicong Fan, Zhongqiu Zhao, Yang Zhao, Richang Hong, Meng Wang
Abstract: Low-light image enhancement (LLIE) aims to improve the visibility and illumination of low-light images. However, real-world low-light images are usually accompanied by flares caused by light sources, which make it difficult to discern the content of dark images. Current LLIE and nighttime flare removal methods face challenges in handling these flared low-light images effectively: (1) flares in dark images disturb the image content and cause uneven lighting, potentially resulting in overexposure or chromatic aberration; (2) the slight noise in low-light images may be amplified during enhancement, leading to speckle noise and blur in the enhanced images; (3) nighttime flare removal methods usually ignore the detailed information in dark regions, which may cause inaccurate representation. To tackle these challenging yet meaningful problems, we propose a novel image enhancement task called Flared Low-Light Image Enhancement (FLLIE). We first synthesize several flared low-light datasets as training/inference data, based on which we develop a novel Fourier transform-based deep FLLIE network termed Synchronous Flare Removal and Brightness Enhancement (SFRBE). Specifically, a Residual Directional Fourier Block (RDFB) is introduced that learns in the frequency domain to extract accurate global information and capture detailed features from multiple directions. Extensive experiments on three flared low-light datasets and real flared low-light images demonstrate the effectiveness of SFRBE for FLLIE.
Citations: 0
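Fourier-based enhancement networks exploit the rough decomposition of an image spectrum into amplitude (global illumination statistics) and phase (structure). The sketch below shows that decomposition with a uniform amplitude gain as a stand-in for learned processing; the gain value and function name are illustrative, not the RDFB.

```python
import numpy as np

# Sketch of the frequency-domain view: scale the Fourier amplitude while keeping
# the phase (image structure) unchanged. A real network would predict a
# spatially varying modulation instead of the constant `gain` used here.
def brighten_in_frequency(img, gain=1.5):
    """Scale the Fourier amplitude of `img`, preserving phase."""
    spec = np.fft.fft2(img)
    amp, phase = np.abs(spec), np.angle(spec)
    out = np.fft.ifft2(gain * amp * np.exp(1j * phase))
    return np.real(out)

img = np.array([[0.1, 0.2], [0.3, 0.4]])
bright = brighten_in_frequency(img)
```

With a constant gain this reduces to global scaling, which is exactly why learned, frequency-dependent modulation is needed to remove flares without overexposing dark regions.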
On latent dynamics learning in nonlinear reduced order modeling
Neural Networks (IF 6.0, CAS Q1, Computer Science) · Pub Date: 2025-01-17 · DOI: 10.1016/j.neunet.2025.107146
Nicola Farenga, Stefania Fresca, Simone Brivio, Andrea Manzoni
Abstract: In this work, we present the novel mathematical framework of latent dynamics models (LDMs) for reduced order modeling of parameterized nonlinear time-dependent PDEs. Our framework casts this task as a nonlinear dimensionality reduction problem while constraining the latent state to evolve according to an unknown dynamical system. A time-continuous setting is employed to derive error and stability estimates for the LDM approximation of the full order model (FOM) solution. We analyze the impact of using an explicit Runge–Kutta scheme in the time-discrete setting, resulting in the ΔLDM formulation, and further explore the learnable setting, ΔLDM_θ, where deep neural networks approximate the discrete LDM components while providing a bounded approximation error with respect to the FOM. Moreover, we extend the concept of parameterized Neural ODEs, a possible way to build data-driven dynamical systems with varying input parameters, to a convolutional architecture in which the input-parameter information is injected by means of an affine modulation mechanism, and we design a convolutional autoencoder able to retain spatial coherence, thus enhancing interpretability at the latent level. Numerical experiments, including the Burgers' and advection–diffusion–reaction equations, demonstrate the framework's ability to obtain a time-continuous approximation of the FOM solution, allowing the LDM approximation to be queried at any time instance while retaining a prescribed level of accuracy. Our findings highlight the remarkable potential of the proposed LDMs, representing a mathematically rigorous framework to enhance the accuracy and approximation capabilities of reduced order modeling for time-dependent parameterized PDEs.
Citations: 0
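The time-discrete ΔLDM setting integrates the latent state with an explicit Runge–Kutta scheme. A minimal sketch of that building block follows, using classical RK4 and a toy linear right-hand side; in the learnable setting ΔLDM_θ, the right-hand side would be a neural network, not the linear decay assumed here.

```python
import numpy as np

# Sketch: one classical RK4 step for the latent ODE dz/dt = f(t, z).
def rk4_step(f, z, t, dt):
    """Advance z by one explicit RK4 step of size dt."""
    k1 = f(t, z)
    k2 = f(t + dt / 2, z + dt * k1 / 2)
    k3 = f(t + dt / 2, z + dt * k2 / 2)
    k4 = f(t + dt, z + dt * k3)
    return z + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

f = lambda t, z: -z                  # toy stand-in for learned latent dynamics
z = np.array([1.0])
for i in range(100):                 # integrate from t = 0 to t = 1
    z = rk4_step(f, z, i * 0.01, 0.01)
```

Because the scheme is time-continuous in spirit (any step size works), the latent trajectory can be queried at arbitrary time instances, which is the property the abstract emphasizes.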
Out-of-Distribution Detection via outlier exposure in federated learning
Neural Networks (IF 6.0, CAS Q1, Computer Science) · Pub Date: 2025-01-17 · DOI: 10.1016/j.neunet.2025.107141
Gu-Bon Jeong, Dong-Wan Choi
Abstract: Among various out-of-distribution (OOD) detection methods for neural networks, outlier exposure (OE) using auxiliary data has been shown to achieve practical performance. However, existing OE methods typically assume a centralized setting and thus are not feasible for standard federated learning (FL), where each client has low computing power and cannot collect a variety of auxiliary samples. To address this issue, we propose a practical yet realistic OE scenario in FL where only the central server holds a large amount of outlier data and each client has a relatively small amount of in-distribution (ID) data. For this scenario, we introduce an effective OE-based OOD detection method, called internal separation & backstage collaboration, which makes the best use of the many auxiliary outlier samples without sacrificing the ultimate goals of FL: privacy preservation and collaborative training performance. The most challenging part is achieving the same effect in our scenario as joint centralized training on outliers and ID samples. Our main strategy (internal separation) is to jointly train the feature vectors of an internal layer with outliers in the back layers of the global model, while ensuring privacy preservation. We also suggest a collaborative approach (backstage collaboration) in which multiple back layers are trained together to detect OOD samples. Our extensive experiments demonstrate remarkable detection performance compared to baseline approaches in the proposed OE scenario.
Citations: 0
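The standard outlier-exposure objective, which this work adapts to federated learning, pushes auxiliary outliers toward a uniform posterior via cross-entropy to the uniform distribution. The sketch below shows that loss term only; how the paper computes it privately at the server is not reproduced, and the function name is an assumption.

```python
import numpy as np

# Sketch of the classic OE term: mean cross-entropy between softmax(logits) and
# the uniform distribution over K classes. Its minimum is log(K), attained when
# the model is maximally uncertain on outliers.
def outlier_exposure_loss(logits):
    """Cross-entropy of model predictions against the uniform target."""
    logits = np.asarray(logits, dtype=float)
    log_probs = logits - np.log(np.sum(np.exp(logits), axis=1, keepdims=True))
    k = logits.shape[1]
    return -np.mean(np.sum(log_probs / k, axis=1))

uniform_logits = np.zeros((2, 4))            # already-uniform predictions
loss_min = outlier_exposure_loss(uniform_logits)
```

Confident (peaked) predictions on outliers incur a larger loss, so at test time a low maximum softmax probability flags an input as OOD.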
Reducing bias in source-free unsupervised domain adaptation for regression
Neural Networks (IF 6.0, CAS Q1, Computer Science) · Pub Date: 2025-01-17 · DOI: 10.1016/j.neunet.2025.107161
Qianshan Zhan, Xiao-Jun Zeng, Qian Wang
Abstract: Due to data privacy and storage concerns, Source-Free Unsupervised Domain Adaptation (SFUDA) focuses on improving an unlabelled target domain by leveraging a pre-trained source model without access to source data. While existing studies attempt to train target models by mitigating biases induced by noisy pseudo-labels, they often lack theoretical guarantees for fully reducing biases and have predominantly addressed classification rather than regression. To address these gaps, our analysis delves into the generalisation error bound of the target model, aiming to understand the intrinsic limitations of pseudo-label-based SFUDA methods. The theoretical results reveal that the biases influencing generalisation error extend beyond the commonly highlighted label inconsistency bias, which denotes the mismatch between pseudo-labels and ground truths, and the feature-label mapping bias, which represents the difference between the proxy target regressor and the real target regressor. Equally significant is the feature misalignment bias, indicating the misalignment between the estimated and real target feature distributions; this factor is frequently neglected or not explicitly addressed in current studies. Additionally, the label inconsistency bias can be unbounded in regression due to the continuous label space, further complicating SFUDA for regression tasks. Guided by these theoretical insights, we propose a Bias-Reduced Regression (BRR) method for SFUDA in regression. This method incorporates Feature Distribution Alignment (FDA) to reduce the feature misalignment bias, Hybrid Reliability Evaluation (HRE) to reduce the feature-label mapping bias, and pseudo-label updating to mitigate the label inconsistency bias. Experiments demonstrate the superior performance of the proposed BRR and the effectiveness of FDA and HRE in reducing biases for regression tasks in SFUDA.
Citations: 0
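Feature misalignment between an estimated and a reference distribution is commonly quantified by matching first and second moments. The sketch below shows that generic statistic; whether BRR's FDA uses exactly this objective is an assumption, and the function name is invented for illustration.

```python
import numpy as np

# Sketch of moment-matching feature alignment: squared distance between the
# means plus squared Frobenius distance between the covariance matrices of
# two feature batches (rows = samples, columns = feature dimensions).
def moment_alignment_loss(feats_a, feats_b):
    """Mean and covariance mismatch between two feature batches."""
    mu_a, mu_b = feats_a.mean(axis=0), feats_b.mean(axis=0)
    cov_a = np.cov(feats_a, rowvar=False)
    cov_b = np.cov(feats_b, rowvar=False)
    return np.sum((mu_a - mu_b) ** 2) + np.sum((cov_a - cov_b) ** 2)

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
zero = moment_alignment_loss(x, x)           # identical batches align exactly
shifted = moment_alignment_loss(x, x + 1.0)  # a mean shift is detected
```

Minimizing such a statistic pulls the estimated target feature distribution toward the reference, which is the role the abstract assigns to FDA.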