Neural Networks — Latest Publications

A multiscale distributed neural computing model database (NCMD) for neuromorphic architecture
IF 6.0 · Tier 1 · Computer Science
Neural Networks Pub Date: 2024-09-10 · DOI: 10.1016/j.neunet.2024.106727
Distributed neuromorphic architecture is a promising technique for on-chip processing of multiple tasks. Deploying a constructed model in a distributed neuromorphic system, however, remains time-consuming and challenging due to considerations such as network topology, connection rules, and compatibility with multiple programming languages. We propose a multiscale distributed neural computing model database (NCMD), a framework designed for ARM-based multi-core hardware that encompasses various neural computing components, including ion channels, synapses, and neurons. We demonstrate how NCMD constructs and deploys multi-compartmental detailed neuron models as well as spiking neural networks (SNNs) in BrainS, a distributed multi-ARM neuromorphic system, and show that the electrodiffusive Pinsky–Rinzel (edPR) model developed with NCMD is well suited to BrainS: all of its dynamic properties, such as changes in membrane potential and ion concentrations, can be easily explored. In addition, SNNs constructed by NCMD achieve 86.67% accuracy on the test set of the Iris dataset. The proposed NCMD offers an innovative approach to applying BrainS in neuroscience, cognitive decision-making, and artificial intelligence research.
Citations: 0
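The abstract builds SNNs from neuron-level components, but NCMD's own API is not shown here. Purely as an illustrative sketch, a leaky integrate-and-fire neuron — the simplest building block such frameworks compose — can be simulated as follows (function name and all parameter values are our assumptions, not from the paper):

```python
import numpy as np

def simulate_lif(current, dt=1e-3, tau=0.02, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Leaky integrate-and-fire neuron: tau * dV/dt = -(V - v_rest) + R_m * I.

    current: external input per time step. Returns the membrane-potential
    trace and the list of spike times (seconds).
    """
    v = v_rest
    trace, spikes = [], []
    for step, i_ext in enumerate(current):
        v += dt / tau * (-(v - v_rest) + r_m * i_ext)
        if v >= v_thresh:          # threshold crossing -> emit spike, reset
            spikes.append(step * dt)
            v = v_reset
        trace.append(v)
    return np.array(trace), spikes
```

Driving the neuron with a constant supra-threshold current yields a regular spike train; multi-compartmental models such as edPR couple several such state equations (plus ion-concentration dynamics) per neuron.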
Multistability and fixed-time multisynchronization of switched neural networks with state-dependent switching rules
IF 6.0 · Tier 1 · Computer Science
Neural Networks Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106713
This paper presents theoretical results on the multistability and fixed-time synchronization of switched neural networks with multiple almost-periodic solutions and state-dependent switching rules. It is shown that the number, location, and stability of the almost-periodic solutions of the switched neural networks can be characterized by means of a state-space partition. Two sets of sufficient conditions are derived to ascertain the existence of 3^n exponentially stable almost-periodic solutions. The paper then introduces, for the first time, the concept of fixed-time multisynchronization in switched neural networks associated with a range of almost-periodic parameters within multiple stable equilibrium states. Based on the multistability results, it is demonstrated that there are 3^n synchronization manifolds, where n is the number of neurons. Additionally, an estimate of the settling time required for drive–response switched neural networks to achieve synchronization is provided. Notably, stable equilibrium points (static multisynchronization), stable almost-periodic orbits (dynamical multisynchronization), and hybrid stable equilibrium states (hybrid multisynchronization) are treated as special cases of multistability (multisynchronization). Two numerical examples substantiate the theoretical results.
Citations: 0
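As a toy illustration of the drive–response synchronization the abstract studies (not the paper's fixed-time controller — a plain linear error feedback is used here, and the network weights and gain are made up):

```python
import numpy as np

def f(x):
    """Neuron activation (tanh, a standard choice)."""
    return np.tanh(x)

# Drive system:    x' = -x + W f(x)
# Response system: y' = -y + W f(y) + u,  with feedback u = -k (y - x)
W = np.array([[1.2, -0.8],
              [0.6,  1.1]])      # hypothetical connection weights
k, dt = 5.0, 1e-3
x = np.array([0.5, -0.3])
y = np.array([-1.0, 0.8])

errs = []
for _ in range(5000):            # integrate 5 s with forward Euler
    e = y - x
    dx = -x + W @ f(x)
    dy = -y + W @ f(y) - k * e
    x = x + dt * dx
    y = y + dt * dy
    errs.append(np.linalg.norm(y - x))
```

With a gain large enough to dominate the Lipschitz constant of `W f(·)`, the synchronization error decays exponentially; the paper's fixed-time results additionally bound the settling time independently of the initial error.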
Intermediate-grained kernel elements pruning with structured sparsity
IF 6.0 · Tier 1 · Computer Science
Neural Networks Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106708
Neural network pruning offers a promising route to deploying neural networks on embedded or mobile devices with limited resources. Although current structured strategies are unconstrained by specific hardware architectures during forward inference, their decline in classification accuracy becomes intolerable at typical pruning rates. This motivates a technique that sustains high pruning rates with a small accuracy loss while retaining the generality of structured pruning. In this paper, we propose a new pruning method, KEP (Kernel Elements Pruning), which compresses deep convolutional neural networks by assessing the significance of the elements in each kernel plane and removing the unimportant ones. We apply a controllable regularization penalty to constrain unimportant elements by adding a prior-knowledge mask, yielding a compact model. For forward inference, we introduce a sparse convolution operation, distinct from the sliding window, that eliminates invalid zero computations, and we verify its effectiveness for further deployment on FPGAs. Extensive experiments demonstrate the effectiveness of KEP on two datasets: CIFAR-10 and ImageNet. Notably, with only a few indexes of non-zero weights introduced, KEP significantly improves on the latest structured methods in parameter and floating-point operation (FLOPs) reduction, and performs well on large datasets.
Citations: 0
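A minimal sketch of the kernel-element granularity the abstract describes — zeroing individual elements inside each kernel plane rather than whole filters or channels — assuming a simple magnitude criterion in place of the paper's regularization-driven importance score:

```python
import numpy as np

def prune_kernel_elements(weights, rate):
    """Zero the lowest-magnitude elements within each kernel plane.

    weights: conv tensor of shape (out_c, in_c, kh, kw).
    rate: fraction of elements to remove per (out_c, in_c) kernel plane.
    Returns a pruned copy; the original tensor is left untouched.
    """
    out_c, in_c, kh, kw = weights.shape
    pruned = weights.copy()
    k = int(rate * kh * kw)                  # elements to drop per plane
    for o in range(out_c):
        for i in range(in_c):
            plane = pruned[o, i].reshape(-1)      # view into `pruned`
            order = np.argsort(np.abs(plane))     # ascending importance
            plane[order[:k]] = 0.0                # drop least important
    return pruned
```

Because every plane keeps the same number of non-zeros, the resulting sparsity is structured enough to index compactly, which is the property the paper's sparse convolution exploits.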
BGAT-CCRF: A novel end-to-end model for knowledge graph noise correction
IF 6.0 · Tier 1 · Computer Science
Neural Networks Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106715
Knowledge graph (KG) noise correction aims to select suitable candidates to correct noisy triples in KGs. Most existing studies perform poorly when repairing a noisy triple that contains more than one incorrect entity or relation, which significantly constrains their use in real-world KGs. To overcome this challenge, we propose a novel end-to-end model, BGAT-CCRF, that achieves better noise-correction results. Specifically, we construct a balance-based graph attention model (BGAT) to learn the features of nodes in triples' neighborhoods and to capture the correlation between nodes based on their position and frequency. Additionally, we design a constrained conditional random field model (CCRF) that selects suitable candidates, guided by three constraints, to correct one or more noises in a triple. In this way, BGAT-CCRF can select multiple candidates from a smaller domain to repair multiple noises in a triple simultaneously, rather than selecting candidates from the whole KG to repair one noise at a time, as traditional methods do. The effectiveness of BGAT-CCRF is validated in KG noise-correction experiments: compared with state-of-the-art models, it improves the fMRR metric by 3.58% on the FB15K dataset. It therefore has the potential to facilitate the implementation of KGs in the real world.
Citations: 0
Exploring refined dual visual features cross-combination for image captioning
IF 6.0 · Tier 1 · Computer Science
Neural Networks Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106710
Transformer-based encoders have become commonplace in image captioning for encoding region and grid features: their multi-head self-attention lets the encoder capture the relationships between different image regions and contextual information. However, stacking Transformer blocks entails computation quadratic in the number of visual features, which not only produces many redundant features but also significantly increases computational overhead. This paper presents a novel Distilled Cross-Combination Transformer (DCCT) network. Technically, we first introduce a distillation cascade fusion encoder (DCFE), in which a probabilistic sparse self-attention layer filters out redundant and distracting features that disturb attention focus, yielding more refined visual features and improving encoding efficiency. We then develop a parallel cross-fusion attention module (PCFA) that fully exploits the complementarity and correlation between grid and region features to better fuse the encoded dual visual features. Extensive experiments on the MSCOCO dataset demonstrate that DCCT achieves outstanding performance, rivaling current state-of-the-art approaches.
Citations: 0
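The probabilistic sparse self-attention layer is described only at a high level. A common way to realize sparse attention — shown here purely as an assumed stand-in, not the paper's exact mechanism — is to keep only the top-k scores per query before the softmax:

```python
import numpy as np

def sparse_topk_attention(Q, K, V, k):
    """Scaled dot-product attention keeping only the top-k keys per query.

    Q: (nq, d), K: (nk, d), V: (nk, dv). Scores below each row's k-th
    largest are masked to -inf, so their softmax weight is exactly zero.
    """
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                       # (nq, nk)
    thresh = np.sort(scores, axis=-1)[:, -k][:, None]   # k-th largest/row
    masked = np.where(scores >= thresh, scores, -np.inf)
    weights = np.exp(masked - masked.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights
```

Each query then attends to a fixed small set of keys, which is one way to cut the redundant-feature computation the abstract attributes to stacked dense self-attention.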
Bidirectional consistency with temporal-aware for semi-supervised time series classification
IF 6.0 · Tier 1 · Computer Science
Neural Networks Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106709
Semi-supervised learning (SSL) has achieved significant success owing to its capacity to alleviate annotation dependencies. Most existing SSL methods use pseudo-labeling to propagate useful supervised information for training on unlabeled data, but they neglect learning temporal representations, which makes it hard to obtain a well-separable feature space for modeling explicit class boundaries. In this work, we propose a semi-supervised Time Series classification framework via Bidirectional Consistency with Temporal-aware learning (TS-BCT), which regularizes the feature-space distribution by learning temporal representations through pseudo-label-guided contrastive learning. Specifically, TS-BCT uses time-specific augmentation to transform each raw time series into two distinct views, avoiding sampling bias. The pseudo-labels for each view, generated through confidence estimation in the feature space, are then employed to propagate class-related information to unlabeled samples. We subsequently introduce a temporal-aware contrastive learning module that learns discriminative, temporally invariant representations. Finally, we design a bidirectional consistency strategy that incorporates the pseudo-labels from both views into temporal-aware contrastive learning to construct a class-related contrastive pattern, enabling the model to learn well-separated feature spaces with more discriminative class boundaries. Extensive experiments on real-world datasets demonstrate the effectiveness of TS-BCT compared to baselines.
Citations: 0
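Two ingredients the abstract names can be sketched concretely: augmenting a time series into two distinct views, and keeping pseudo-labels only where confidence is high. The specific augmentations and threshold below are our assumptions, not the paper's choices:

```python
import numpy as np

rng = np.random.default_rng(0)

def jitter(x, sigma=0.05):
    """View 1: additive Gaussian noise on a batch of series (n, length)."""
    return x + rng.normal(0.0, sigma, x.shape)

def scaling(x, sigma=0.1):
    """View 2: per-series random amplitude scaling."""
    return x * rng.normal(1.0, sigma, (x.shape[0], 1))

def confident_pseudo_labels(probs, tau=0.9):
    """Keep a pseudo-label only where the top class probability exceeds tau.

    probs: (n, num_classes) softmax outputs. Returns (labels, keep_mask).
    """
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    return labels, conf > tau
```

In a TS-BCT-style pipeline the confident labels from one view would then supervise the contrastive pairs built on the other view, and vice versa (the bidirectional consistency).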
Is artificial consciousness achievable? Lessons from the human brain
IF 6.0 · Tier 1 · Computer Science
Neural Networks Pub Date: 2024-09-07 · DOI: 10.1016/j.neunet.2024.106714
We analyse the question of developing artificial consciousness from an evolutionary perspective, taking the evolution of the human brain and its relation to consciousness as a reference model or benchmark. This analysis reveals several structural and functional features of the human brain that appear key to reaching human-like complex conscious experience, and that current research on Artificial Intelligence (AI) should take into account in its attempt to develop systems capable of human-like conscious processing. We argue that, even if AI is limited in its ability to emulate human consciousness for both intrinsic (structural and architectural) and extrinsic (related to the current stage of scientific and technological knowledge) reasons, taking inspiration from those characteristics of the brain that make human-like conscious processing possible, or that modulate it, is a potentially promising strategy towards developing conscious AI.

It also cannot be theoretically excluded that AI research could develop partial or alternative forms of consciousness that are qualitatively different from the human form, and that may be more or less sophisticated depending on the perspective. We therefore recommend neuroscience-inspired caution in talking about artificial consciousness: since using the same word "consciousness" for humans and AI is ambiguous and potentially misleading, we propose to clearly specify which level and/or type of consciousness AI research aims to develop, as well as what would be common versus different in AI conscious processing compared to human conscious experience.
Citations: 0
Near-optimal deep neural network approximation for Korobov functions with respect to Lp and H1 norms
IF 6.0 · Tier 1 · Computer Science
Neural Networks Pub Date: 2024-09-06 · DOI: 10.1016/j.neunet.2024.106702
This paper derives the optimal rate of approximation of Korobov functions on the high-dimensional hypercube by deep neural networks with respect to L^p-norms and the H^1-norm. Our approximation bounds are non-asymptotic in both the width and the depth of the networks. The obtained rates exhibit a remarkable super-convergence feature, improving on the existing convergence rates of neural networks as continuous function approximators. Finally, using a VC-dimension argument, we show that the established rates are near-optimal.
Citations: 0
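For context, the Korobov spaces usually considered in this line of approximation theory are functions vanishing on the boundary of the hypercube whose mixed derivatives up to order two are p-integrable. This standard definition is our addition; the abstract itself does not state which variant the paper uses:

```latex
X^{2,p}\big([0,1]^d\big) \;=\;
\Big\{\, f \in L^p\big([0,1]^d\big) \;:\;
f\big|_{\partial [0,1]^d} = 0,\;
D^{\boldsymbol{\alpha}} f \in L^p\big([0,1]^d\big)
\ \text{for all}\ \|\boldsymbol{\alpha}\|_\infty \le 2 \,\Big\}.
```

The bounded-mixed-derivative structure is what makes rates beating the usual curse-of-dimensionality scaling possible for these functions.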
Complementary information mutual learning for multimodality medical image segmentation
IF 6.0 · Tier 1 · Computer Science
Neural Networks Pub Date: 2024-09-06 · DOI: 10.1016/j.neunet.2024.106670
Radiologists must combine medical images of multiple modalities for tumor segmentation and diagnosis, owing to the limitations of medical imaging technology and the diversity of tumor signals; this has driven the development of multimodal learning in medical image segmentation. However, redundancy among modalities creates challenges for existing subtraction-based joint learning methods, such as misjudging the importance of modalities, ignoring modality-specific information, and increasing cognitive load. These issues ultimately decrease segmentation accuracy and increase the risk of overfitting. This paper presents the complementary information mutual learning (CIML) framework, which mathematically models and addresses the negative impact of inter-modal redundant information. CIML adopts the idea of addition and removes inter-modal redundancy through inductive-bias-driven task decomposition and message-passing-based redundancy filtering. CIML first decomposes the multimodal segmentation task into multiple subtasks based on expert prior knowledge, minimizing the information dependence between modalities. Furthermore, CIML introduces a scheme in which each modality can additively extract information from the other modalities through message passing. To achieve non-redundancy of the extracted information, redundancy filtering is transformed into complementary information learning, inspired by the variational information bottleneck; this learning procedure can be solved efficiently by variational inference and cross-modal spatial attention. Numerical results on the verification task and standard benchmarks indicate that CIML efficiently removes redundant information between modalities and outperforms state-of-the-art methods in validation accuracy and segmentation quality. Notably, message-passing-based redundancy filtering allows neural network visualization techniques to expose the knowledge relationships among the different modalities, which aids interpretability.
Citations: 0
Sampled-data synchronization for fuzzy inertial cellular neural networks and its application in secure communication
IF 6.0 · Tier 1 · Computer Science
Neural Networks Pub Date: 2024-09-06 · DOI: 10.1016/j.neunet.2024.106671
This paper designs a sampled-data control (SDC) scheme to study the synchronization problem of fuzzy inertial cellular neural networks (FICNNs). The rate at which cellular neuronal information or activation is transmitted can be described by a first-order differential model, but the network's response to received information may itself depend on time, which can be modeled as a second-order (inertial) cellular neural network (ICNN). A fuzzy cellular neural network (FCNN) combines fuzzy logic with a cellular neural network: its fuzzy-logic component consists of input and output templates in the form of sum-of-product operations, which evaluate information transmission on a rule basis. Hence, this study proposes a user-controlled FICNN model with the same dynamic properties as the FICNN model. In this regard, the synchronization approach is effective in matching the dynamical properties of the drive system (without control input) and the response system (with external control input). Theoretically, synchronization between drive and response can be ensured by analyzing the error model derived from them; owing to the nonlinearities, Lyapunov stability theory is used to derive sufficient stability conditions, in terms of linear matrix inequalities (LMIs), that guarantee convergence of the error model to the origin. Unlike existing stability conditions, the conditions derived here incorporate the delay information as a quadratic function with lower and upper bounds, evaluated through the negative-determination lemma (NDL). Numerical simulations supporting the proposed theoretical framework are discussed. As a direct application, the FICNN model serves as a cryptosystem in an image encryption and decryption algorithm, and the corresponding outcomes are illustrated along with security measures.
Citations: 0
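The image-encryption application can be illustrated with a generic chaos-based stream cipher. Purely as a sketch: a logistic map stands in for the synchronized FICNN trajectory that would supply the keystream in the paper's scheme, and the map, seed, and byte quantization are all our assumptions:

```python
import numpy as np

def chaotic_keystream(n, x0=0.7, r=3.99):
    """Byte keystream from a logistic map x <- r*x*(1-x).

    A stand-in for the synchronized chaotic neural-network states; both
    ends regenerate the same stream from the shared seed x0.
    """
    ks = np.empty(n, dtype=np.uint8)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        ks[i] = int(x * 256) % 256      # quantize state to one byte
    return ks

def encrypt(img, x0=0.7):
    """XOR a uint8 image with the keystream (shape is preserved)."""
    flat = img.ravel()
    return (flat ^ chaotic_keystream(flat.size, x0)).reshape(img.shape)

decrypt = encrypt   # XOR is its own inverse under the same keystream
```

Synchronization is what makes this usable for secure communication: the receiver's response system, driven by the sampled-data controller, reproduces the transmitter's chaotic trajectory and hence the keystream.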