Neural Networks: Latest Articles

Event-based optical flow on neuromorphic processor: ANN vs. SNN comparison based on activation sparsification
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-04-07 DOI: 10.1016/j.neunet.2025.107447
Yingfu Xu, Guangzhi Tang, Amirreza Yousefzadeh, Guido C.H.E. de Croon, Manolis Sifalakis
Abstract: Spiking neural networks (SNNs) for event-based optical flow are claimed to be computationally more efficient than their artificial neural network (ANN) counterparts, but a fair comparison is missing from the literature. In this work, we propose an event-based optical flow solution based on activation sparsification and a neuromorphic processor, SENECA. SENECA has an event-driven processing mechanism that can exploit the sparsity in ANN activations and SNN spikes to accelerate the inference of both types of neural networks. Thanks to our novel sparsification-aware training, the ANN and the SNN under comparison have similarly low activation/spike density (~5%). In hardware-in-the-loop experiments designed to measure average time and energy consumption, the SNN consumes 44.9 ms and 927.0 μJ, which are 62.5% and 75.2% of the ANN's consumption, respectively. We find that the SNN's higher efficiency is attributable to its lower pixel-wise spike density (43.5% vs. 66.5%), which requires fewer memory-access operations for neuron states.
Neural Networks, Volume 188, Article 107447
Citations: 0
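A minimal PyTorch sketch of the sparsification idea, under the assumption (not stated in the abstract) that low activation density is encouraged with an L1 penalty on post-ReLU activations; the paper's actual sparsification-aware training and its ~5% density target are not reproduced, and `SparseMLP`, `activation_density`, and `lam` are hypothetical names:

```python
import torch
import torch.nn as nn

def activation_density(x: torch.Tensor) -> torch.Tensor:
    """Fraction of non-zero entries in an activation tensor."""
    return (x != 0).float().mean()

class SparseMLP(nn.Module):
    def __init__(self, d_in=128, d_hidden=256, d_out=2):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x):
        h = torch.relu(self.fc1(x))   # sparsity is measured and penalized here
        return self.fc2(h), h

model = SparseMLP()
x = torch.randn(32, 128)
y = torch.randint(0, 2, (32,))
logits, h = model(x)
lam = 1e-4                            # hypothetical penalty weight
loss = nn.functional.cross_entropy(logits, y) + lam * h.abs().mean()
loss.backward()
print(f"activation density: {activation_density(h):.3f}")
```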
Reducing ambient noise diffusion model for underwater acoustic target
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-04-07 DOI: 10.1016/j.neunet.2025.107470
Yunqi Zhang, Jiansen Hao, Qunfeng Zeng
Abstract: The recognition of underwater acoustic targets is a challenging problem, and irregular ambient noise is a key factor limiting recognition effectiveness. Research on diffusion models in the audio field has centered mainly on the human voice, so applying them to underwater acoustics may be valuable. In this paper, we propose a general method for reducing ambient noise based on the diffusion model. A Decapitation normalization method is proposed, which balances the data distribution across frequency scales and unifies the noise addition in the time and frequency domains. A Reducing Ambient Noise Diffusion (RAND) model is then built on the diffusion model, which can effectively remove ambient noise within a small number of steps. Because some sampling steps may have a negative effect, a Three-condition mask method is proposed to make the model more robust during sampling. The effectiveness of the proposed method is verified by experiments in both the time and frequency domains.
Neural Networks, Volume 188, Article 107470
Citations: 0
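For context, a minimal sketch of the standard DDPM forward (noising) step that such a model builds on, applied to a spectrogram-like tensor; the paper's Decapitation normalization and Three-condition mask are specific to RAND and are not reproduced here:

```python
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # common linear noise schedule
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(x0: torch.Tensor, t: int) -> torch.Tensor:
    """Sample x_t ~ q(x_t | x_0) = N(sqrt(a_bar_t) x_0, (1 - a_bar_t) I)."""
    eps = torch.randn_like(x0)
    return alphas_bar[t].sqrt() * x0 + (1.0 - alphas_bar[t]).sqrt() * eps

spec = torch.randn(1, 1, 128, 256)             # toy time-frequency input
noisy = q_sample(spec, t=200)
```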
A prompt regularization approach to enhance few-shot class-incremental learning with Two-Stage Classifier
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-04-07 DOI: 10.1016/j.neunet.2025.107453
Meilan Hao, Yizhan Gu, Kejian Dong, Prayag Tiwari, Xiaoqing Lv, Xin Ning
Abstract: With a limited number of labeled samples, Few-Shot Class-Incremental Learning (FSCIL) seeks to efficiently train and update models without forgetting previously learned tasks. Because pre-trained models can learn extensive feature representations from large existing datasets, they offer strong knowledge foundations and transferability, which makes them useful in both few-shot and incremental learning scenarios. Additionally, prompt learning improves the performance of pre-trained deep learning models on downstream tasks, particularly in large-scale language or vision models. In this paper, we propose a novel Prompt Regularization (PrRe) approach that maximizes the fusion of prompts by embedding two different prompts, the Task Prompt and the Global Prompt, inside a pre-trained Vision Transformer (ViT). For the classification phase, we propose a Two-Stage Classifier (TSC) that uses K-Nearest Neighbors for the base session and a Prototype Classifier for incremental sessions, integrated with a global self-attention module. Through experiments on multiple benchmarks, we demonstrate the effectiveness and superiority of our method. The code is available at https://github.com/gyzzzzzzzz/PrRe.
Neural Networks, Volume 188, Article 107453
Citations: 0
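A minimal sketch of the prototype-classifier half of such a two-stage scheme: prototypes are per-class mean embeddings, and a query is assigned to the nearest prototype by cosine similarity. This is the generic prototype-classification recipe, not the paper's exact TSC; the KNN base-session stage and the global self-attention module are omitted:

```python
import torch
import torch.nn.functional as F

def build_prototypes(feats: torch.Tensor, labels: torch.Tensor, n_cls: int):
    """feats: (N, D) embeddings; returns (n_cls, D) per-class mean embeddings."""
    return torch.stack([feats[labels == c].mean(dim=0) for c in range(n_cls)])

def prototype_predict(query: torch.Tensor, protos: torch.Tensor):
    """Assign each (B, D) query to its most similar prototype."""
    sims = F.cosine_similarity(query.unsqueeze(1), protos.unsqueeze(0), dim=-1)
    return sims.argmax(dim=1)                  # (B,) predicted class ids

feats = torch.randn(50, 64)                    # toy support embeddings
labels = torch.randint(0, 5, (50,))
protos = build_prototypes(feats, labels, n_cls=5)
preds = prototype_predict(torch.randn(8, 64), protos)
```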
Self-Referencing Agents for Unsupervised Reinforcement Learning
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-04-05 DOI: 10.1016/j.neunet.2025.107448
Andrew Zhao, Erle Zhu, Rui Lu, Matthieu Lin, Yong-Jin Liu, Gao Huang
Abstract: Current unsupervised reinforcement learning (URL) methods often overlook reward nonstationarity during pre-training and the forgetting of exploratory behavior during fine-tuning. Our study introduces Self-Reference (SR), a novel add-on module designed to address both issues. SR stabilizes intrinsic rewards through historical referencing during pre-training, mitigating nonstationarity. During fine-tuning, it preserves exploratory behaviors, retaining valuable skills. Our approach significantly boosts the performance and sample efficiency of existing model-free URL methods on the Unsupervised Reinforcement Learning Benchmark, improving IQM by up to 17% and reducing the Optimality Gap by 31%. This highlights the general applicability of our add-on module and its compatibility with existing methods.
Neural Networks, Volume 188, Article 107448
Citations: 0
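A minimal sketch of one way to stabilize a nonstationary intrinsic reward: normalizing it against a running history of past rewards. This only illustrates the general idea of historical referencing; the actual SR module is more involved, and `RewardStabilizer` is an invented name:

```python
from collections import deque
import numpy as np

class RewardStabilizer:
    def __init__(self, maxlen: int = 10_000):
        self.history = deque(maxlen=maxlen)    # reference window of past rewards

    def __call__(self, r_intrinsic: float) -> float:
        self.history.append(r_intrinsic)
        mu = np.mean(self.history)
        sigma = np.std(self.history) + 1e-8
        return (r_intrinsic - mu) / sigma      # normalized w.r.t. history

stabilizer = RewardStabilizer()
rewards = [stabilizer(float(r)) for r in np.random.rand(100) * 5.0]
```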
A fractional-order multi-delayed bicyclic crossed neural network: Stability, bifurcation, and numerical solution
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-04-05 DOI: 10.1016/j.neunet.2025.107436
Pushpendra Kumar, Tae H. Lee, Vedat Suat Erturk
Abstract: In this paper, we propose a fractional-order bicyclic crossed neural network (NN) with multiple time delays, in which two neurons are shared between the rings. The fractional-order NN is defined in terms of Caputo fractional derivatives. We prove boundedness and the existence of a unique solution for the proposed NN. We analyze stability and the onset of Hopf bifurcation by converting the proposed multi-delay NN into a single-delay NN. We then solve the proposed NN numerically with the L1 predictor–corrector algorithm and corroborate the theoretical results with graphical simulations. We find that both the time delay and the order of the derivative influence the stability and bifurcation of the fractional-order NN. The proposed fractional-order NN is a unique multi-delayed bicyclic crossover NN with two shared neurons between its rings; such a ring structure appropriately mimics the information-transmission process within intricate NNs.
Neural Networks, Volume 188, Article 107436
Citations: 0
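For reference, the Caputo fractional derivative of order 0 < q < 1 that defines such models; this is the standard textbook construction, not a formula specific to the paper:

```latex
% Caputo fractional derivative of order q, 0 < q < 1:
\[
{}^{C}\!D_{t}^{q}\, x(t)
  \;=\; \frac{1}{\Gamma(1-q)} \int_{0}^{t} (t-s)^{-q}\, x'(s)\, \mathrm{d}s .
\]
```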
Exploration and exploitation in continual learning
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-04-05 DOI: 10.1016/j.neunet.2025.107444
Kiseong Hong, Hyundong Jin, Sungho Suh, Eunwoo Kim
Abstract: Continual learning (CL) has received a surge of interest, particularly in parameter-isolation approaches, which aim to prevent catastrophic forgetting by assigning a disjoint parameter set to each task. Despite their effectiveness, existing approaches often neglect task-specific differences by depending on predetermined parameter-allocation ratios. This can lead to suboptimal performance, as it disregards the unique requirements of individual tasks. In this paper, we propose a novel Exploration–Exploitation approach to address this issue. Our goal is to adaptively distribute resources between acquiring new information (exploration) and retaining previously learned knowledge (exploitation) as new tasks emerge. This allows a continual learner to adaptively allocate parameters for every consecutive task by letting the two compete for resources. To achieve this, we introduce an allocation learner that learns the intricate interplay between exploration and exploitation across all layers of the continual learner. We demonstrate the proposed method on popular image-classification benchmarks for diverse CL scenarios, including domain-shift task-incremental learning. Experimental results show that the proposed method outperforms other competitive continual learning approaches by an average margin of 5.3% across all scenarios.
Neural Networks, Volume 188, Article 107444
Citations: 0
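A minimal, hypothetical sketch of letting exploration (new, trainable parameters) and exploitation (frozen, previously learned parameters) compete for a layer's output through a learnable allocation gate; the paper's allocation learner is not reproduced, and `AllocatedLinear` is an invented illustration of the resource-competition idea only:

```python
import torch
import torch.nn as nn

class AllocatedLinear(nn.Module):
    def __init__(self, d_in: int, d_out: int):
        super().__init__()
        self.old = nn.Linear(d_in, d_out)          # exploitation: frozen knowledge
        self.new = nn.Linear(d_in, d_out)          # exploration: task-specific
        self.old.requires_grad_(False)
        self.logit = nn.Parameter(torch.zeros(1))  # learnable allocation ratio

    def forward(self, x):
        a = torch.sigmoid(self.logit)              # share given to exploration
        return a * self.new(x) + (1.0 - a) * self.old(x)

layer = AllocatedLinear(32, 16)
out = layer(torch.randn(4, 32))
```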
GDVIFNet: A generated depth and visible image fusion network with edge feature guidance for salient object detection
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-04-05 DOI: 10.1016/j.neunet.2025.107445
Xiaogang Song, Yuping Tan, Xiaochang Li, Xinhong Hei
Abstract: In recent years, despite significant advances in salient object detection (SOD), performance in complex interference environments remains suboptimal. To address these challenges, additional modalities such as depth (SOD-D) or thermal imaging (SOD-T) are often introduced. However, existing methods typically rely on specialized depth or thermal devices to capture these modalities, which can be costly and inconvenient. To address this limitation using only a single RGB image, we propose GDVIFNet, a novel approach that leverages Depth Anything to generate depth images. Since the generated depth images may contain noise and artifacts, we incorporate self-supervised techniques to generate edge-feature information; in the process of generating edge features, the noise and artifacts in the generated depth images are effectively removed. Our method employs a dual-branch architecture that combines CNN and Transformer branches for feature extraction. We design a step trimodal interaction unit (STIU) to fuse the RGB features with the depth features from the CNN branch, and a self-cross attention fusion (SCF) to integrate RGB features with depth features from the Transformer branch. Finally, guided by edge features from our self-supervised edge guidance module (SEGM), we employ the CNN-Edge-Transformer step fusion (CETSF) to fuse features from both branches. Experimental results demonstrate that our method achieves state-of-the-art performance across multiple datasets. Code can be found at https://github.com/typist2001/GDVIFNet.
Neural Networks, Volume 188, Article 107445
Citations: 0
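A minimal sketch of the depth-generation step from a single RGB image, assuming the Hugging Face `depth-estimation` pipeline with a Depth Anything checkpoint; the checkpoint name below, the input path, and the inference route the authors actually used are assumptions:

```python
from transformers import pipeline
from PIL import Image

# Assumed checkpoint; the paper may use a different Depth Anything variant.
depth_estimator = pipeline(
    task="depth-estimation",
    model="LiheYoung/depth-anything-small-hf",
)
image = Image.open("rgb_input.png")            # hypothetical input path
result = depth_estimator(image)
result["depth"].save("generated_depth.png")    # PIL image of the depth map
```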
Exploring quantum neural networks for binary classification on MNIST dataset: A swap test approach
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-04-04 DOI: 10.1016/j.neunet.2025.107442
Kehan Chen, Jiaqi Liu, Fei Yan
Abstract: In this study, we propose a novel modularized Quantum Neural Network (mQNN) model tailored to the binary classification problem on the MNIST dataset. The mQNN organizes input information using quantum images and trainable quantum parameters encoded in superposition states. Leveraging quantum parallelism, the model efficiently computes the inner products of quantum neurons via the swap test, achieving constant complexity. To enhance the expressive capacity of the mQNN, nonlinear transformations (specifically, quantum versions of activation functions) are integrated into the quantum network. The mQNN's circuits are constructed from flexible quantum modules, allowing the model to adapt its structure to different input data types and scales for optimal performance. Furthermore, rigorous mathematical derivations validate the quantum-state evolution during computation within a quantum neuron. Tests on the PennyLane platform simulate the quantum environment and confirm the mQNN's effectiveness on the MNIST dataset. These findings highlight the potential of quantum computing for advancing image-classification tasks.
Neural Networks, Volume 188, Article 107442
Citations: 0
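A minimal swap-test sketch in PennyLane (the platform the authors report using), estimating the squared inner product of two single-qubit states from the ancilla statistics P(0) = (1 + |⟨ψ|φ⟩|²)/2; the mQNN encodes many inputs and weights in superposition, which this toy circuit does not attempt:

```python
import pennylane as qml

dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def swap_test(theta_a, theta_b):
    qml.RY(theta_a, wires=1)        # prepare |psi> on wire 1
    qml.RY(theta_b, wires=2)        # prepare |phi> on wire 2
    qml.Hadamard(wires=0)           # put the ancilla into superposition
    qml.CSWAP(wires=[0, 1, 2])      # swap the registers, controlled on the ancilla
    qml.Hadamard(wires=0)
    return qml.probs(wires=0)       # [P(0), P(1)] on the ancilla

p0 = swap_test(0.3, 0.7)[0]
overlap_sq = 2.0 * p0 - 1.0         # invert P(0) = (1 + |<psi|phi>|^2) / 2
print(f"|<psi|phi>|^2 ~= {overlap_sq:.4f}")
```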
Boosting semi-supervised federated learning by effectively exploiting server-side knowledge and client-side unconfident samples
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-04-04 DOI: 10.1016/j.neunet.2025.107440
Hongquan Liu, Yuxi Mi, Yateng Tang, Jihong Guan, Shuigeng Zhou
Abstract: Semi-supervised federated learning (SSFL) has emerged as a promising paradigm for reducing the need for fully labeled data when training federated learning (FL) models. This paper focuses on the label-at-server scenario, where clients' data are entirely unlabeled and the server possesses only a limited amount of labeled data. In this setting, non-independent and identically distributed (non-IID) local data and incorrect pseudo-labels can introduce bias into the model during local training. Prior works try to alleviate the bias by fine-tuning the global model with clean labeled data, ignoring the possibility of explicitly leveraging server-side knowledge to guide local training. Additionally, existing methods typically discard samples with unconfident pseudo-labels, leaving many samples unused and resulting in suboptimal performance and slow convergence. This paper introduces a novel method that enhances SSFL performance by effectively exploiting server-side clean knowledge and client-side unconfident samples. Specifically, we propose a representation alignment module that mitigates the influence of non-IID data by aligning local features with the class proxies of the server's labeled data. Furthermore, we employ a shrink loss to reduce the risk associated with unreliable pseudo-labels, ensuring that the valuable information in the entire unlabeled dataset is exploited. Extensive experiments on five benchmark datasets under various settings demonstrate the effectiveness and generality of the proposed method, which not only outperforms existing methods but also reduces the communication cost required to reach target performance.
Neural Networks, Volume 188, Article 107440
Citations: 0
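A minimal sketch of one plausible form of representation alignment: pulling a local feature toward the server-side class proxy of its (pseudo-)label via a temperature-scaled cosine cross-entropy. The loss form is an assumption, and the paper's shrink loss for unconfident samples is not reproduced:

```python
import torch
import torch.nn.functional as F

def proxy_alignment_loss(feats, pseudo_labels, proxies, tau=0.1):
    """feats: (B, D) local features; proxies: (C, D) class proxies from
    the server's labeled data; tau: similarity temperature (assumed)."""
    feats = F.normalize(feats, dim=-1)
    proxies = F.normalize(proxies, dim=-1)
    logits = feats @ proxies.t() / tau         # (B, C) scaled cosine similarities
    return F.cross_entropy(logits, pseudo_labels)

loss = proxy_alignment_loss(
    torch.randn(16, 128), torch.randint(0, 10, (16,)), torch.randn(10, 128)
)
```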
FSDM: An efficient video super-resolution method based on Frames-Shift Diffusion Model
IF 6.0 · CAS Q1 · Computer Science
Neural Networks Pub Date: 2025-04-03 DOI: 10.1016/j.neunet.2025.107435
Shijie Yang, Chao Chen, Jie Liu, Jie Tang, Gangshan Wu
Abstract: Video super-resolution is a fundamental task aimed at enhancing video quality through intricate modeling techniques. Recent advances in diffusion models have significantly improved image super-resolution, but their integration into video super-resolution workflows remains constrained by the computational complexity of temporal fusion modules, which demand more resources than their image counterparts. To address this challenge, we propose a novel approach: a Frames-Shift Diffusion Model built on image diffusion models. Compared to directly training diffusion-based video super-resolution models, redesigning the diffusion process of image models without introducing complex temporal modules requires minimal training cost. We incorporate temporal information into the image super-resolution diffusion model using optical flow and perform multi-frame fusion. The model adapts the diffusion process to transition smoothly from image super-resolution to video super-resolution without additional weight parameters. As a result, the Frames-Shift Diffusion Model efficiently processes videos frame by frame while maintaining computational efficiency, enhancing perceptual quality, and achieving PSNR and SSIM comparable to other state-of-the-art diffusion-based VSR methods. This approach optimizes video super-resolution by simplifying the integration of temporal data, addressing key challenges in the field.
Neural Networks, Volume 188, Article 107435
Citations: 0
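A minimal sketch of the optical-flow ingredient of such temporal fusion: warping the previous frame's features to the current frame with `torch.nn.functional.grid_sample`. How FSDM shifts these warped features through the diffusion process is not reproduced here:

```python
import torch
import torch.nn.functional as F

def flow_warp(feat_prev: torch.Tensor, flow: torch.Tensor) -> torch.Tensor:
    """feat_prev: (B, C, H, W) features; flow: (B, 2, H, W) pixel offsets."""
    b, _, h, w = feat_prev.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    grid_x = xs.unsqueeze(0) + flow[:, 0]      # displaced x coordinates
    grid_y = ys.unsqueeze(0) + flow[:, 1]      # displaced y coordinates
    # normalize to [-1, 1], the coordinate range grid_sample expects
    grid = torch.stack(
        (2.0 * grid_x / (w - 1) - 1.0, 2.0 * grid_y / (h - 1) - 1.0), dim=-1
    )
    return F.grid_sample(feat_prev, grid, align_corners=True)

# Zero flow returns the input features unchanged (identity warp).
warped = flow_warp(torch.randn(1, 64, 32, 32), torch.zeros(1, 2, 32, 32))
```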