Neurocomputing Latest Publications

Stability analysis of inertial delayed neural network with delayed impulses via dynamic event-triggered impulsive control
IF 5.5 · Q2 · Computer Science
Neurocomputing Pub Date : 2025-02-04 DOI: 10.1016/j.neucom.2025.129573
Mengyao Shi , Lulu Li , Jinde Cao , Liang Hua , Mahmoud Abdel-Aty
{"title":"Stability analysis of inertial delayed neural network with delayed impulses via dynamic event-triggered impulsive control","authors":"Mengyao Shi ,&nbsp;Lulu Li ,&nbsp;Jinde Cao ,&nbsp;Liang Hua ,&nbsp;Mahmoud Abdel-Aty","doi":"10.1016/j.neucom.2025.129573","DOIUrl":"10.1016/j.neucom.2025.129573","url":null,"abstract":"<div><div>This paper investigates the stability of inertial delayed neural network under dynamic event-triggered impulsive control (DETIC). We innovate by generating the impulsive sequence through DETIC and incorporating impulsive delays, thereby enhancing the model’s practical relevance. Our methodology involves a two-step process: first, we transform the inertial neural network into a first-order differential form using appropriate vector transformations. Then, leveraging Lyapunov-based dynamic event-triggered control, we derive sufficient conditions for both uniform stability and uniform asymptotic stability of the system. To ensure practical applicability, we establish specific parameter constraints for the DETIC mechanism that precludes the Zeno phenomenon. To demonstrate the accuracy and efficacy of our theoretical results, we present two simulation examples.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"626 ","pages":"Article 129573"},"PeriodicalIF":5.5,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143314283","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
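To make the order-reduction step in the abstract above concrete, here is the standard variable substitution for a generic inertial delayed neural network. The notation is illustrative only and not necessarily the exact model or transformation used in the paper.

```latex
% Generic inertial delayed neural network (illustrative notation, not taken from the paper):
%   \ddot{x}_i(t) = -a_i \dot{x}_i(t) - b_i x_i(t)
%                   + \sum_j c_{ij} f_j(x_j(t)) + \sum_j d_{ij} f_j(x_j(t-\tau(t))) + I_i
% Substituting y_i(t) = \dot{x}_i(t) + \xi_i x_i(t) reduces it to a first-order system:
\begin{aligned}
\dot{x}_i(t) &= -\xi_i x_i(t) + y_i(t),\\
\dot{y}_i(t) &= -\bigl(b_i - \xi_i(a_i - \xi_i)\bigr) x_i(t) - (a_i - \xi_i)\, y_i(t)
                + \sum_j c_{ij} f_j(x_j(t)) + \sum_j d_{ij} f_j\bigl(x_j(t-\tau(t))\bigr) + I_i,
\end{aligned}
% after which a Lyapunov function can be built on the augmented state (x, y).
```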
Privacy-preserving face attribute classification via differential privacy
IF 5.5 · Q2 · Computer Science
Neurocomputing Pub Date : 2025-02-04 DOI: 10.1016/j.neucom.2025.129556
Xiaoting Zhang , Tao Wang , Junhao Ji , Yushu Zhang , Rushi Lan
{"title":"Privacy-preserving face attribute classification via differential privacy","authors":"Xiaoting Zhang ,&nbsp;Tao Wang ,&nbsp;Junhao Ji ,&nbsp;Yushu Zhang ,&nbsp;Rushi Lan","doi":"10.1016/j.neucom.2025.129556","DOIUrl":"10.1016/j.neucom.2025.129556","url":null,"abstract":"<div><div>The development of face attribute recognition technology has enhanced the intelligence capabilities in the retail industry. Merchants use the surveillance system to capture customers’ face images, and analyze their basic characteristics to provide accurate product recommendations and optimize product configurations. However, these captured face images may contain sensitive visual information, especially identity-related data, which could lead to potential security and privacy risks. Current methods for face privacy protection cannot fully support privacy preserving face attributes classification. To this end, this paper proposes a privacy protection scheme that employs differential privacy in the frequency domain to mitigate risks in face attribute classification systems. Our main goal is to take the frequency domain features perturbed with differential privacy as the input of the face attribute classification model to resist privacy attacks. Specifically, the proposed scheme first transforms the original face image into the frequency domain using the discrete cosine transform (DCT) and removes the DC components that contain the visual information. Then the privacy budget allocation in the differential privacy framework is optimized based on the loss of the face attribute classification network. Finally, the corresponding differential privacy noise is added to the frequency representation. The utilization of differential privacy theoretically provides privacy guarantees. Sufficient experimental results show that the proposed scheme can well balance the privacy-utility.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"626 ","pages":"Article 129556"},"PeriodicalIF":5.5,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143314457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
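As an illustration of the frequency-domain perturbation described above, the sketch below applies a DCT, removes the DC component, and adds Laplace noise to the remaining coefficients. It uses a uniform privacy budget and a crude per-image sensitivity estimate, both simplifications; the paper's loss-driven budget allocation is not reproduced here, and the function name is an assumption.

```python
# Minimal sketch: DCT -> drop DC -> Laplace perturbation. Illustrative only.
import numpy as np
from scipy.fft import dctn

def dp_frequency_features(face_gray: np.ndarray, epsilon: float = 1.0) -> np.ndarray:
    """face_gray: 2-D grayscale image. Returns DP-perturbed frequency features."""
    coeffs = dctn(face_gray.astype(np.float64), norm="ortho")  # to the frequency domain
    coeffs[0, 0] = 0.0                                         # remove the DC component (visual content)
    sensitivity = np.abs(coeffs).max()                         # crude sensitivity proxy for illustration
    noise = np.random.laplace(0.0, sensitivity / epsilon, size=coeffs.shape)
    return coeffs + noise                                      # fed to the attribute classifier downstream

if __name__ == "__main__":
    img = np.random.randint(0, 256, size=(112, 112))
    print(dp_frequency_features(img, epsilon=2.0).shape)       # (112, 112)
```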
A contrastive learning strategy for optimizing node non-alignment in dynamic community detection
IF 5.5 · Q2 · Computer Science
Neurocomputing Pub Date : 2025-02-04 DOI: 10.1016/j.neucom.2025.129548
Xiaohong Li, Wanyao Shi, Qixuan Peng, Hongyan Ran
{"title":"A contrastive learning strategy for optimizing node non-alignment in dynamic community detection","authors":"Xiaohong Li,&nbsp;Wanyao Shi,&nbsp;Qixuan Peng,&nbsp;Hongyan Ran","doi":"10.1016/j.neucom.2025.129548","DOIUrl":"10.1016/j.neucom.2025.129548","url":null,"abstract":"<div><div>Dynamic community detection, which focuses on tracking local topological variation with time, is crucial for understanding the changing affiliations of nodes to communities in complex networks. Existing researches fell short of expectations primarily due to their heavy reliance on clustering methods or evolutionary algorithms. The emergence of graph contrastive learning offers us a novel perspective and inspiration, which performed well in recognizing pattern at both the node-node and node-graph levels. However, there are still the following limitations in practice: (i) conventional data augmentations may undermine task-relevant information by bring in invalid views or false positive samples, leading the model toward weak discriminative representations. (ii) the non-alignment of nodes caused by dynamic changes also limits the expressive ability of GCL. In this paper, we propose a <strong>C</strong>ontrastive <strong>L</strong>earning strategy for <strong>O</strong>ptimizing <strong>N</strong>ode non-alignment in <strong>D</strong>ynamic Community Detection (<strong>CL-OND</strong>). Initially, we confirm the viability of utilizing dynamic adjacent snapshots as monitoring signals through graph spectral experiments, which eliminates the dependence of contrastive learning on traditional data augmentations. Subsequently, we construct an end-to-end dynamic community detection model and introduce a non-aligned neighbor contrastive loss to capture temporal properties and inherent structure of evolutionary graphs by constructing positive and negative samples. Furthermore, extensive experimental results demonstrate that our approach consistently outperforms others in terms of performance.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"626 ","pages":"Article 129548"},"PeriodicalIF":5.5,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143348779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
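The core idea of treating adjacent snapshots as contrastive views can be illustrated with a plain temporal InfoNCE loss. The sketch below assumes node indices are aligned across snapshots, which is exactly the restriction that the paper's non-aligned neighbor contrastive loss is designed to relax.

```python
# Generic temporal InfoNCE between node embeddings of two adjacent snapshots.
# Positive pair = the same node at t and t+1; all other nodes act as negatives.
import torch
import torch.nn.functional as F

def temporal_infonce(z_t: torch.Tensor, z_t1: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """z_t, z_t1: (N, d) embeddings of the same N nodes at snapshots t and t+1."""
    z_t, z_t1 = F.normalize(z_t, dim=1), F.normalize(z_t1, dim=1)
    logits = z_t @ z_t1.T / tau                       # cosine similarity matrix
    labels = torch.arange(z_t.size(0))                # diagonal entries are the positives
    return F.cross_entropy(logits, labels)
```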
DKETFormer: Salient object detection in optical remote sensing images based on discriminative knowledge extraction and transfer
IF 5.5 · Q2 · Computer Science
Neurocomputing Pub Date : 2025-02-04 DOI: 10.1016/j.neucom.2025.129558
Yuze Sun, Hongwei Zhao, Jianhang Zhou
{"title":"DKETFormer: Salient object detection in optical remote sensing images based on discriminative knowledge extraction and transfer","authors":"Yuze Sun,&nbsp;Hongwei Zhao,&nbsp;Jianhang Zhou","doi":"10.1016/j.neucom.2025.129558","DOIUrl":"10.1016/j.neucom.2025.129558","url":null,"abstract":"<div><div>Generally, most methods for salient object detection in optical remote sensing images (ORSI-SOD) are based on convolutional neural networks (CNNs). However, CNNs, due to their architectural characteristics, can only encode local semantic information, which leads to a lack of exploration of discriminative features on a large scale. Therefore, to encode the long-term dependency within the detection image, enhance the extraction of discriminative knowledge, and transfer it at multiple scales, we introduce a Transformer architecture called DKETFormer. Specifically, DKETFormer utilizes the Transformer backbone to obtain multi-scale feature maps that have encoded long-term dependency relationships. Then, it constructs a decoder using the Cross-spatial Knowledge Extraction Module (CKEM) and the Inter-layer Feature Transfer Module (IFTM). The CKEM is capable of extracting discriminative information across receptive fields while preserving knowledge from each channel. It also utilizes global information encoding to calibrate channel weights, resulting in improved knowledge aggregation and capturing of pixel-level pairwise relationships. The IFTM utilizes encoded and extracted information from the backbone and CKEM, employing a self-attention mechanism with cosine similarity knowledge to model and propagate discriminative features. Finally, we generated the final detection map using a salient object detector. The results of comparative experiments and ablation experiments demonstrate the effectiveness of the proposed DKETFormer and its internal modules.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"625 ","pages":"Article 129558"},"PeriodicalIF":5.5,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143210303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
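The abstract's "self-attention mechanism with cosine-similarity knowledge" can be read generically as attention weights built from normalized feature similarities. The sketch below shows only that generic reading, with all shapes assumed; it is not the IFTM itself.

```python
# Generic cosine-similarity self-attention over a token sequence (B, N, C).
import torch
import torch.nn.functional as F

def cosine_self_attention(tokens: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    q = F.normalize(tokens, dim=-1)                    # unit-norm queries
    k = F.normalize(tokens, dim=-1)                    # unit-norm keys
    attn = torch.softmax(q @ k.transpose(-2, -1) / temperature, dim=-1)
    return attn @ tokens                               # propagate features by similarity
```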
MW-FixMatch: A class imbalance semi-supervised learning algorithm based on re-weighting
IF 5.5 · Q2 · Computer Science
Neurocomputing Pub Date : 2025-02-04 DOI: 10.1016/j.neucom.2025.129385
Xiaoqing Zheng, Weijie Hong, Dengde Chen, Anke Xue, Yaguang Kong
{"title":"MW-FixMatch: A class imbalance semi-supervised learning algorithm based on re-weighting","authors":"Xiaoqing Zheng,&nbsp;Weijie Hong,&nbsp;Dengde Chen,&nbsp;Anke Xue,&nbsp;Yaguang Kong","doi":"10.1016/j.neucom.2025.129385","DOIUrl":"10.1016/j.neucom.2025.129385","url":null,"abstract":"<div><div>Semi-supervised learning for image classification is an important research area in computer vision. These algorithms typically assume that both labeled and unlabeled datasets are class-balanced and share the same distribution. However, when there is an imbalance in the class distribution, it can significantly affect their performance. To address this issue, we propose MW-FixMatch, a novel approach that better adjusts the semi-supervised learning process in the presence of class imbalance. It utilizes a weight network to balance the contribution of labeled and unlabeled data, and the parameters of this network are learned from a class-balanced sampled set. We tested our approach on several publicly available image datasets with class imbalance and consistently achieved superior results across multiple experiments.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"626 ","pages":"Article 129385"},"PeriodicalIF":5.5,"publicationDate":"2025-02-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143372431","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
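A minimal sketch of the re-weighting idea: a small network outputs a weight that balances the supervised and unsupervised loss terms. The single-scalar design and the input statistics are assumptions, and the paper's learning of this network on a class-balanced sampled set is omitted here.

```python
# Sketch of a learned weight balancing supervised and pseudo-label losses.
import torch
import torch.nn as nn

class LossWeightNet(nn.Module):
    def __init__(self, in_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 16), nn.ReLU(),
                                 nn.Linear(16, 1), nn.Sigmoid())

    def forward(self, loss_stats: torch.Tensor) -> torch.Tensor:
        return self.net(loss_stats)                       # weight in (0, 1)

def combined_loss(sup_loss: torch.Tensor, unsup_loss: torch.Tensor,
                  weight_net: LossWeightNet) -> torch.Tensor:
    stats = torch.stack([sup_loss.detach(), unsup_loss.detach()]).unsqueeze(0)  # (1, 2)
    w = weight_net(stats).squeeze()
    return w * sup_loss + (1.0 - w) * unsup_loss
```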
SPADesc: Semantic and parallel attention with feature description
IF 5.5 · Q2 · Computer Science
Neurocomputing Pub Date : 2025-02-03 DOI: 10.1016/j.neucom.2025.129567
Haijun Meng , Huimin Lu , Bozhi Ding , Qiangchang Wang
{"title":"SPADesc: Semantic and parallel attention with feature description","authors":"Haijun Meng ,&nbsp;Huimin Lu ,&nbsp;Bozhi Ding ,&nbsp;Qiangchang Wang","doi":"10.1016/j.neucom.2025.129567","DOIUrl":"10.1016/j.neucom.2025.129567","url":null,"abstract":"<div><div>Local feature detection and description are essential preliminary tasks in a multitude of computer vision applications. Despite the prowess of deep neural networks in feature extraction, they still grapple with challenges in capturing globally invariant and robust features, especially in dynamic scenes and areas with simplistic and repetitive geometric structures. This paper introduces a multi-scale feature fusion framework, SPADesc, which addresses these challenges by leveraging dynamic weighted fusion (DWF) and semantic priors. We integrate convolutional and self-attention mechanisms to bolster local feature detection and description in complex environments. Our approach employs a Parallel Convolution and Attention (PCA) module to generate descriptors that encompass both local and global scales. Additionally, a Semantic-Guided (SG) module is employed to produce class-aware global mask information, which implicitly guides the selection of keypoints and descriptors. By incorporating a Semantically Weighted (SW) loss function, we enhance the robustness and discriminative power of the descriptors. Extensive experimental results across various visual tasks demonstrate significant performance improvements, highlighting the superior adaptability and precision of our proposed model. The code is available at <span><span>https://github.com/Diffcc/SPADesc</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"625 ","pages":"Article 129567"},"PeriodicalIF":5.5,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143210304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
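A generic reading of a "parallel convolution and attention" block: a convolutional branch captures local structure, a self-attention branch captures global context, and a 1x1 convolution fuses them. Layer sizes and the fusion choice are assumptions, not the paper's PCA module.

```python
# Sketch of a parallel local-convolution / global-attention descriptor block.
import torch
import torch.nn as nn

class ParallelConvAttention(nn.Module):
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=3, padding=1)    # local branch
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)   # global branch
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.conv(x)
        tokens = x.flatten(2).transpose(1, 2)                  # (B, H*W, C)
        glob, _ = self.attn(tokens, tokens, tokens)
        glob = glob.transpose(1, 2).reshape(b, c, h, w)
        return self.fuse(torch.cat([local, glob], dim=1))
```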
MABQN: Multi-agent reinforcement learning algorithm with discrete policy
IF 5.5 · Q2 · Computer Science
Neurocomputing Pub Date : 2025-02-03 DOI: 10.1016/j.neucom.2025.129552
Qing Xie , Zicheng Wang , Yuyuan Fang , Yukai Li
{"title":"MABQN: Multi-agent reinforcement learning algorithm with discrete policy","authors":"Qing Xie ,&nbsp;Zicheng Wang ,&nbsp;Yuyuan Fang ,&nbsp;Yukai Li","doi":"10.1016/j.neucom.2025.129552","DOIUrl":"10.1016/j.neucom.2025.129552","url":null,"abstract":"<div><div>Cooperative multi-agent reinforcement learning (MARL) for continuous control has diverse applications in real-world scenarios. Most of those MARL algorithms focus on enhancing performance through policy-based paradigms, the challenge of low sample efficiency caused by the continuous nature remains underexplored. To address this issue, we propose the <strong>M</strong>ulti-<strong>A</strong>gent <strong>B</strong>ranching <strong>Q</strong>-<strong>N</strong>etworks (MABQN) algorithm, an improved QMIX architecture integrating action discretization and value decomposition. MABQN reduces the policy search space by progressively discretizing the continuous action space and decoupling action dimensions, thereby improving learning efficiency. Moreover, it employs a centralized hypernetwork to decompose joint action values, mitigating the credit assignment problem. Experimental results demonstrate that MABQN outperforms other mainstream cooperative MARL algorithms across continuous, discrete, and hybrid action space tasks.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"626 ","pages":"Article 129552"},"PeriodicalIF":5.5,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143348615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
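The branching/discretization idea can be sketched as follows: each continuous action dimension is discretized into a fixed number of bins and receives its own Q head, so the output size grows linearly with the number of dimensions rather than exponentially. Network sizes and the [-1, 1] action range are assumptions; the QMIX-style value mixing across agents is not shown.

```python
# Sketch of an action-branching Q-network with per-dimension discrete heads.
import torch
import torch.nn as nn

class BranchingQNet(nn.Module):
    def __init__(self, obs_dim: int, action_dims: int, bins: int = 7):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.branches = nn.ModuleList(nn.Linear(128, bins) for _ in range(action_dims))

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.trunk(obs)
        return torch.stack([branch(h) for branch in self.branches], dim=1)  # (B, dims, bins)

def greedy_action(qnet: BranchingQNet, obs: torch.Tensor, bins: int = 7) -> torch.Tensor:
    idx = qnet(obs).argmax(dim=-1).float()            # best bin per action dimension
    return idx / (bins - 1) * 2.0 - 1.0               # map bin indices back to [-1, 1]
```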
GCTGNN: A forecasting method for time series based on graph neural networks and graph clustering
IF 5.5 · Q2 · Computer Science
Neurocomputing Pub Date : 2025-02-03 DOI: 10.1016/j.neucom.2025.129544
Xin Liu , Yapeng Meng , Feng Chen , Dengjian Qiao , Fan Wu
{"title":"GCTGNN: A forecasting method for time series based on graph neural networks and graph clustering","authors":"Xin Liu ,&nbsp;Yapeng Meng ,&nbsp;Feng Chen ,&nbsp;Dengjian Qiao ,&nbsp;Fan Wu","doi":"10.1016/j.neucom.2025.129544","DOIUrl":"10.1016/j.neucom.2025.129544","url":null,"abstract":"<div><div>Graph structure can better extract and represent the complex spatio-temporal relationships among multiple time series in forecasting tasks. Their accurate prediction, however, remains a challenge if each forecasting target is not only affected by the overall trend of the graph structure but also by the strong correlation among its local neighborhood nodes as the time series evolve. In this paper, we propose a forecasting method for time series based on Graph Clustering and Graph Neural Networks, named as the GCTGNN model. GCTGNN firstly clusters nodes of strong correlations within the graph’s local neighborhood, deriving multiple sub-graphs. Subsequently, it employs a graph neural network to extract the nodes’ feature information within each sub-graph. Then, a two-layer time series forecasting structure is introduced. The first layer learns the local change trend of each sub-graph over time which are then fused across sub-graphs through an attention mechanism, thereby deriving the global change features. The second layer then propagates these fused global features to each node within each sub-graph to obtain the forecasting results. Experimental results on different datasets show that GCTGNN outperforms other baseline models in the task of complex graph time series forecasting where local neighborhood nodes have strong correlations.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"626 ","pages":"Article 129544"},"PeriodicalIF":5.5,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143372434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
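The attention fusion across sub-graphs described above can be illustrated with a simple score-and-weight scheme over per-sub-graph trend features. Dimensions are assumptions, and the surrounding graph clustering, GNN, and forecasting layers are omitted.

```python
# Sketch of attention fusion over per-sub-graph trend features.
import torch
import torch.nn as nn

class SubgraphAttentionFusion(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, subgraph_feats: torch.Tensor) -> torch.Tensor:
        """subgraph_feats: (num_subgraphs, dim) local trends -> (dim,) global feature."""
        weights = torch.softmax(self.score(subgraph_feats), dim=0)   # (num_subgraphs, 1)
        return (weights * subgraph_feats).sum(dim=0)                 # weighted global trend
```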
Learning semantical dynamics and spatiotemporal collaboration for human pose estimation in video
IF 5.5 · Q2 · Computer Science
Neurocomputing Pub Date : 2025-02-03 DOI: 10.1016/j.neucom.2025.129581
Runyang Feng , Haoming Chen
{"title":"Learning semantical dynamics and spatiotemporal collaboration for human pose estimation in video","authors":"Runyang Feng ,&nbsp;Haoming Chen","doi":"10.1016/j.neucom.2025.129581","DOIUrl":"10.1016/j.neucom.2025.129581","url":null,"abstract":"<div><div>Temporal modeling and spatio-temporal collaboration are pivotal techniques for video-based human pose estimation. Most state-of-the-art methods adopt optical flow or temporal difference, learning local visual content correspondence across frames at the pixel level, to capture motion dynamics. However, such a paradigm essentially relies on localized pixel-to-pixel similarity, which neglects the <em>semantical correlations</em> among frames and is vulnerable to image quality degradations (<em>e.g.</em> occlusions or blur). Moreover, existing approaches often combine motion and spatial (appearance) features via simple concatenation or summation, leading to practical challenges in fully leveraging these distinct modalities. In this paper, we present a novel framework that learns multi-level semantical dynamics and dense spatio-temporal collaboration for multi-frame human pose estimation. Specifically, we first design a Multi-Level Semantic Motion Encoder using a multi-masked context and pose reconstruction strategy. This strategy stimulates the model to explore multi-granularity spatiotemporal semantic relationships among frames by progressively masking the features of (patch) cubes and frames. We further introduce a Spatial-Motion Mutual Learning module which densely propagates and consolidates context information from spatial and motion features to enhance the capability of the model. Extensive experiments demonstrate that our approach sets new state-of-the-art results on three benchmark datasets, PoseTrack2017, PoseTrack2018, and PoseTrack21.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"626 ","pages":"Article 129581"},"PeriodicalIF":5.5,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143348778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
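The masking strategy behind the Multi-Level Semantic Motion Encoder can be illustrated at its simplest: randomly hide whole frames in a feature sequence and ask a decoder to reconstruct them. The mask ratio and tensor shapes are assumptions; the paper also masks patch cubes and reconstructs poses, which is not shown here.

```python
# Sketch of frame-level masking for a reconstruction-style pretext task.
import torch

def mask_frames(feats: torch.Tensor, mask_ratio: float = 0.3):
    """feats: (B, T, C, H, W) per-frame features. Returns masked features and the mask."""
    b, t = feats.shape[:2]
    mask = torch.rand(b, t) < mask_ratio      # True marks a masked frame
    masked = feats.clone()
    masked[mask] = 0.0                        # hide the selected frames
    return masked, mask

# Training would then minimize a reconstruction loss restricted to the masked
# positions, e.g. ((decoder(masked) - feats)[mask] ** 2).mean().
```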
Coarse-to-fine text injecting for realistic image super-resolution
IF 5.5 · Q2 · Computer Science
Neurocomputing Pub Date : 2025-02-03 DOI: 10.1016/j.neucom.2025.129591
Xiaoyu Chen , Chao Bai , Zhenyao Wu , Xinyi Wu , Qi Zou , Yong Xia , Song Wang
{"title":"Coarse-to-fine text injecting for realistic image super-resolution","authors":"Xiaoyu Chen ,&nbsp;Chao Bai ,&nbsp;Zhenyao Wu ,&nbsp;Xinyi Wu ,&nbsp;Qi Zou ,&nbsp;Yong Xia ,&nbsp;Song Wang","doi":"10.1016/j.neucom.2025.129591","DOIUrl":"10.1016/j.neucom.2025.129591","url":null,"abstract":"<div><div>Image Super-Resolution (ISR) aims at enhancing the resolution of a given image by guessing the RGB values of additional pixels from existing ones. Equipping with the pre-trained text-to-image diffusion models, <em>i.e.</em> Stable Diffusion (SD), and fine-tuning it for image-to-image purposes, recent ISR methods are able to leverage powerful prior knowledge to generate abundant details from the low-resolution observations. They abolished the capability of using precise text to control image generation and thus might lose the prior can be queried via text for ISR. In this work, we propose a plug-and-play text-injecting strategy that capitalizes on the inherent benefits of text guidance from a pre-trained SD model. Specifically, two levels of text prompts, including general and detailed descriptions, are automatically generated via existing large language models. By introducing a time-aware text injector, we are able to inject text features with varying granularity to generate progressively detailed results as diffusion unfolds. Based on the observation that the stochastic nature of the diffusion model may lead to different reconstruction results, we develop two test-time methods that conduct multiple runs with different initial noises to refine the fidelity or realism of reconstructed images. Additionally, we explore the interpretability of our method by visualizing the cross-attention maps in the denoising U-Net, which show an obvious correspondence between semantically meaningful words in the text prompt and the corresponding image features. Extensive experiments demonstrated the superiority of our method in balancing fidelity and realism over current state-of-the-art approaches.</div></div>","PeriodicalId":19268,"journal":{"name":"Neurocomputing","volume":"626 ","pages":"Article 129591"},"PeriodicalIF":5.5,"publicationDate":"2025-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143387390","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
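The "time-aware" injection of two prompt granularities can be illustrated by a simple schedule that leans on the general description early in denoising and on the detailed description later. The linear schedule and embedding shapes are assumptions, not the paper's injector.

```python
# Sketch of time-aware blending of coarse and fine text embeddings during denoising.
import torch

def time_aware_text(general_emb: torch.Tensor, detailed_emb: torch.Tensor,
                    t: int, num_steps: int) -> torch.Tensor:
    """general_emb, detailed_emb: (L, d) prompt embeddings; t counts down from num_steps to 0."""
    progress = 1.0 - t / num_steps            # 0 at the first denoising step, 1 at the last
    return (1.0 - progress) * general_emb + progress * detailed_emb
```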