Neural Networks: Latest Articles

Corrigendum to "Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning" [Neural Networks Volume 178, October (2024), 1-11/106414].
IF 6.0 | Region 1 | Computer Science
Neural Networks Pub Date : 2024-11-15 DOI: 10.1016/j.neunet.2024.106878
Sanghyeon Kim, Hyunmo Yang, Younghyun Kim, Youngjoon Hong, Eunbyung Park
{"title":"Corrigendum to \"Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning\" [Neural Networks Volume 178, October (2024), 1-11/106414]].","authors":"Sanghyeon Kim, Hyunmo Yang, Younghyun Kim, Youngjoon Hong, Eunbyung Park","doi":"10.1016/j.neunet.2024.106878","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.106878","url":null,"abstract":"","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"181 ","pages":"106878"},"PeriodicalIF":6.0,"publicationDate":"2024-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142644985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MIU-Net: Advanced multi-scale feature extraction and imbalance mitigation for optic disc segmentation
Neural Networks Pub Date : 2024-11-14 DOI: 10.1016/j.neunet.2024.106895
Yichen Xiao, Yi Shao, Zhi Chen, Ruyi Zhang, Xuan Ding, Jing Zhao, Shengtao Liu, Teruko Fukuyama, Yu Zhao, Xiaoliao Peng, Guangyang Tian, Shiping Wen, Xingtao Zhou
{"title":"MIU-Net: Advanced multi-scale feature extraction and imbalance mitigation for optic disc segmentation","authors":"Yichen Xiao ,&nbsp;Yi Shao ,&nbsp;Zhi Chen ,&nbsp;Ruyi Zhang ,&nbsp;Xuan Ding ,&nbsp;Jing Zhao ,&nbsp;Shengtao Liu ,&nbsp;Teruko Fukuyama ,&nbsp;Yu Zhao ,&nbsp;Xiaoliao Peng ,&nbsp;Guangyang Tian ,&nbsp;Shiping Wen ,&nbsp;Xingtao Zhou","doi":"10.1016/j.neunet.2024.106895","DOIUrl":"10.1016/j.neunet.2024.106895","url":null,"abstract":"<div><div>Pathological myopia is a severe eye condition that can cause serious complications like retinal detachment and macular degeneration, posing a threat to vision. Optic disc segmentation helps measure changes in the optic disc and observe the surrounding retina, aiding early detection of pathological myopia. However, these changes make segmentation difficult, resulting in accuracy levels that are not suitable for clinical use. To address this, we propose a new model called MIU-Net, which improves segmentation performance through several innovations. First, we introduce a multi-scale feature extraction (MFE) module to capture features at different scales, helping the model better identify optic disc boundaries in complex images. Second, we design a dual attention module that combines channel and spatial attention to focus on important features and improve feature use. To tackle the imbalance between optic disc and background pixels, we use focal loss to enhance the model’s ability to detect minority optic disc pixels. We also apply data augmentation techniques to increase data diversity and address the lack of training data. Our model was tested on the iChallenge-PM and iChallenge-AMD datasets, showing clear improvements in accuracy and robustness compared to existing methods. 
The experimental results demonstrate the effectiveness and potential of our model in diagnosing pathological myopia and other medical image processing tasks.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"182 ","pages":"Article 106895"},"PeriodicalIF":6.0,"publicationDate":"2024-11-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
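The MIU-Net abstract above handles the optic-disc/background pixel imbalance with focal loss. As a reference point, here is a minimal NumPy sketch of the standard binary focal loss; the paper's exact loss variant and hyperparameters are assumptions here, not taken from the abstract:

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25, eps=1e-7):
    """Binary focal loss: the (1 - p_t)^gamma factor down-weights easy
    pixels so the rare foreground (optic-disc) pixels dominate the loss."""
    p = np.clip(p, eps, 1 - eps)
    p_t = np.where(y == 1, p, 1 - p)            # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)
    return -np.mean(alpha_t * (1 - p_t) ** gamma * np.log(p_t))
```

With gamma = 0 and alpha = 0.5 this reduces to half the usual binary cross-entropy; raising gamma shifts the loss mass toward misclassified pixels.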
Recovering Permuted Sequential Features for effective Reinforcement Learning
Neural Networks Pub Date : 2024-11-13 DOI: 10.1016/j.neunet.2024.106795
Yi Jiang, Mingxiao Feng, Wengang Zhou, Houqiang Li
{"title":"Recovering Permuted Sequential Features for effective Reinforcement Learning","authors":"Yi Jiang ,&nbsp;Mingxiao Feng ,&nbsp;Wengang Zhou ,&nbsp;Houqiang Li","doi":"10.1016/j.neunet.2024.106795","DOIUrl":"10.1016/j.neunet.2024.106795","url":null,"abstract":"<div><div>When applying Reinforcement Learning (RL) to the real-world visual tasks, two major challenges necessitate consideration: sample inefficiency and limited generalization. To address the above two challenges, previous works focus primarily on learning semantic information from the visual state for improving sample efficiency, but they do not explicitly learn other valuable aspects, such as spatial information. Moreover, they improve generalization by learning representations that are invariant to alterations of task-irrelevant variables, without considering task-relevant variables. To enhance sample efficiency and generalization of the base RL algorithm in visual tasks, we propose an auxiliary task called Recovering Permuted Sequential Features (RPSF). Our method enhances generalization by learning the spatial structure information of the agent, which can mitigate the effects of changes in both task-relevant and task-irrelevant variables. Moreover, it explicitly learns both semantic and spatial information from the visual state by disordering and subsequently recovering a sequence of features to generate more holistic representations, thereby improving sample efficiency. Extensive experiments demonstrate that our method significantly improves the sample efficiency and generalization of the base RL algorithm and outperforms various state-of-the-art baselines across diverse tasks in unseen environments. 
Furthermore, our method exhibits compatibility with both CNN and Transformer architectures.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"182 ","pages":"Article 106795"},"PeriodicalIF":6.0,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
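The permute-and-recover idea in the RPSF abstract can be illustrated with a toy data-generation step. This is a hypothetical sketch of the general auxiliary-task setup only, not the paper's architecture or training loop:

```python
import numpy as np

def make_rpsf_sample(features, rng):
    """Build one training pair for a permute-and-recover auxiliary task:
    a network would receive `shuffled` and be trained to predict `order`,
    i.e. where each feature vector sat in the original sequence."""
    order = rng.permutation(features.shape[0])
    return features[order], order

rng = np.random.default_rng(0)
feats = np.arange(12, dtype=float).reshape(4, 3)   # 4 feature vectors
shuffled, order = make_rpsf_sample(feats, rng)
# inverting the (here, ground-truth) permutation restores the sequence
restored = shuffled[np.argsort(order)]
```

Solving this recovery task forces the representation to encode the ordering, i.e. spatial/sequential structure, of the features, which is the intuition the abstract describes.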
UMS2-ODNet: Unified-scale domain adaptation mechanism driven object detection network with multi-scale attention
Neural Networks Pub Date : 2024-11-12 DOI: 10.1016/j.neunet.2024.106890
Yuze Li, Yan Zhang, Chunling Yang, Yu Chen
{"title":"UMS2-ODNet: Unified-scale domain adaptation mechanism driven object detection network with multi-scale attention","authors":"Yuze Li ,&nbsp;Yan Zhang ,&nbsp;Chunling Yang ,&nbsp;Yu Chen","doi":"10.1016/j.neunet.2024.106890","DOIUrl":"10.1016/j.neunet.2024.106890","url":null,"abstract":"<div><div>Unsupervised domain adaptation techniques improve the generalization capability and performance of detectors, especially when the source and target domains have different distributions. Compared with two-stage detectors, one-stage detectors (especially YOLO series) provide better real-time capabilities and become primary choices in industrial fields. In this paper, to improve cross-domain object detection performance, we propose a Unified-Scale Domain Adaptation Mechanism Driven Object Detection Network with Multi-Scale Attention (UMS<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>-ODNet). UMS<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>-ODNet chooses YOLOv6 as the basic framework in terms of its balance between efficiency and accuracy. UMS<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>-ODNet considers the adaptation consistency across different scale feature maps, which tends to be ignored by existing methods. A unified-scale domain adaptation mechanism is designed to fully utilize and unify the discriminative information from different scales. A multi-scale attention module is constructed to further improve the multi-scale representation ability of features. A novel loss function is created to maintain the consistency of multi-scale information by considering the homology of the descriptions from the same latent feature. Multiply experiments are conducted on four widely used datasets. 
Our proposed method outperforms other state-of-the-art techniques, illustrating the feasibility and effectiveness of the proposed UMS<span><math><msup><mrow></mrow><mrow><mn>2</mn></mrow></msup></math></span>-ODNet.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"181 ","pages":"Article 106890"},"PeriodicalIF":6.0,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142639982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Partially multi-view clustering via re-alignment
Neural Networks Pub Date : 2024-11-12 DOI: 10.1016/j.neunet.2024.106884
Wenbiao Yan, Jihua Zhu, Jinqian Chen, Haozhe Cheng, Shunshun Bai, Liang Duan, Qinghai Zheng
{"title":"Partially multi-view clustering via re-alignment","authors":"Wenbiao Yan ,&nbsp;Jihua Zhu ,&nbsp;Jinqian Chen ,&nbsp;Haozhe Cheng ,&nbsp;Shunshun Bai ,&nbsp;Liang Duan ,&nbsp;Qinghai Zheng","doi":"10.1016/j.neunet.2024.106884","DOIUrl":"10.1016/j.neunet.2024.106884","url":null,"abstract":"<div><div>Multi-view clustering learns consistent information from multi-view data, aiming to achieve more significant clustering characteristics. However, data in real-world scenarios often exhibit temporal or spatial asynchrony, leading to views with unaligned instances. Existing methods primarily address this issue by learning transformation matrices to align unaligned instances, but this process of learning differentiable transformation matrices is cumbersome. To address the challenge of partially unaligned instances, we propose <strong>P</strong>artially <strong>M</strong>ulti-<strong>v</strong>iew <strong>C</strong>lustering via <strong>R</strong>e-alignment (PMVCR). Our approach integrates representation learning and data alignment through a two-stage training and a re-alignment process. Specifically, our training process consists of three stages: (i) In the coarse-grained alignment stage, we construct negative instance pairs for unaligned instances and utilize contrastive learning to preliminarily learn the view representations of the instances. (ii) In the re-alignment stage, we match unaligned instances based on the similarity of their view representations, aligning them with the primary view. (iii) In the fine-grained alignment stage, we further enhance the discriminative power of the view representations and the model’s ability to differentiate between clusters. Compared to existing models, our method effectively leverages information between unaligned samples and enhances model generalization by constructing negative instance pairs. Clustering experiments on several popular multi-view datasets demonstrate the effectiveness and superiority of our method. 
Our code is publicly available at <span><span>https://github.com/WenB777/PMVCR.git</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"182 ","pages":"Article 106884"},"PeriodicalIF":6.0,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645083","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
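Stage (ii) of PMVCR matches unaligned instances by the similarity of their view representations. A minimal sketch of such a matching step, assuming cosine similarity over learned representations (the similarity measure is an assumption; the abstract does not specify it):

```python
import numpy as np

def realign_by_similarity(view_a, view_b):
    """Greedy re-alignment sketch: match each row of view_b to its most
    similar row in view_a by cosine similarity, assuming representations
    of the same instance are closest across views."""
    a = view_a / np.linalg.norm(view_a, axis=1, keepdims=True)
    b = view_b / np.linalg.norm(view_b, axis=1, keepdims=True)
    sim = b @ a.T                    # pairwise cosine similarities
    return np.argmax(sim, axis=1)    # index into view_a for each b-row
```

Note that a greedy argmax can map two rows of `view_b` to the same row of `view_a`; a full implementation would enforce a one-to-one matching (e.g. via the Hungarian algorithm).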
Coordinating Multi-Agent Reinforcement Learning via Dual Collaborative Constraints
Neural Networks Pub Date : 2024-11-12 DOI: 10.1016/j.neunet.2024.106858
Chao Li, Shaokang Dong, Shangdong Yang, Yujing Hu, Wenbin Li, Yang Gao
{"title":"Coordinating Multi-Agent Reinforcement Learning via Dual Collaborative Constraints","authors":"Chao Li ,&nbsp;Shaokang Dong ,&nbsp;Shangdong Yang ,&nbsp;Yujing Hu ,&nbsp;Wenbin Li ,&nbsp;Yang Gao","doi":"10.1016/j.neunet.2024.106858","DOIUrl":"10.1016/j.neunet.2024.106858","url":null,"abstract":"<div><div>Many real-world multi-agent tasks exhibit a nearly decomposable structure, where interactions among agents within the same interaction set are strong while interactions between different sets are relatively weak. Efficiently modeling the nearly decomposable structure and leveraging it to coordinate agents can enhance the learning efficiency of multi-agent reinforcement learning algorithms for cooperative tasks, while existing works typically fail. To overcome this limitation, this paper proposes a novel algorithm named Dual Collaborative Constraints (DCC) that identifies the interaction sets as subtasks and achieves both intra-subtask and inter-subtask coordination. Specifically, DCC employs a bi-level structure to periodically distribute agents into multiple subtasks, and proposes both local and global collaborative constraints based on mutual information to facilitate both intra-subtask and inter-subtask coordination among agents. These two constraints ensure that agents within the same subtask reach a consensus on their local action selections and all of them select superior joint actions that maximize the overall task performance. 
Experimentally, we evaluate DCC on various cooperative multi-agent tasks, and its superior performance against multiple state-of-the-art baselines demonstrates its effectiveness.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"182 ","pages":"Article 106858"},"PeriodicalIF":6.0,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142649523","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
ALR-HT: A fast and efficient Lasso regression without hyperparameter tuning
Neural Networks Pub Date : 2024-11-12 DOI: 10.1016/j.neunet.2024.106885
Yuhang Wang, Bin Zou, Jie Xu, Chen Xu, Yuan Yan Tang
{"title":"ALR-HT: A fast and efficient Lasso regression without hyperparameter tuning","authors":"Yuhang Wang ,&nbsp;Bin Zou ,&nbsp;Jie Xu ,&nbsp;Chen Xu ,&nbsp;Yuan Yan Tang","doi":"10.1016/j.neunet.2024.106885","DOIUrl":"10.1016/j.neunet.2024.106885","url":null,"abstract":"<div><div>Lasso regression, known for its efficacy in high-dimensional data analysis and feature selection, stands as a cornerstone in the realm of supervised learning for regression estimation. However, hyperparameter tuning for Lasso regression is often time-consuming and susceptible to noisy data in big data scenarios. In this paper we introduce a new additive Lasso regression without Hyperparameter Tuning (ALR-HT) by integrating Markov resampling with additive models. We estimate the generalization bounds of the proposed ALR-HT and establish the fast learning rate. The experimental results for benchmark datasets confirm that the proposed ALR-HT algorithm has better performance in terms of sampling and training total time, mean squared error (MSE) compared to other algorithms. We present some discussions on the ALR-HT algorithm and apply it to Ridge regression, to show its versatility and effectiveness in regularized regression scenarios.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"181 ","pages":"Article 106885"},"PeriodicalIF":6.0,"publicationDate":"2024-11-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142639971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
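For context on the ALR-HT abstract: standard Lasso minimizes (1/(2n))‖y − Xw‖² + λ‖w‖₁ and is typically solved by coordinate descent, with λ chosen by an expensive search, which is exactly the tuning step ALR-HT is designed to avoid. Below is a minimal generic coordinate-descent solver for orientation only, not the paper's method:

```python
import numpy as np

def soft_threshold(z, lam):
    """Proximal operator of the l1 penalty, the core Lasso update."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for (1/(2n))||y - Xw||^2 + lam * ||w||_1.
    Note the hyperparameter `lam` still has to be supplied by the user."""
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(d):
            r_j = y - X @ w + X[:, j] * w[j]   # partial residual without x_j
            rho = X[:, j] @ r_j / n
            w[j] = soft_threshold(rho, lam) / col_sq[j]
    return w
```

On noiseless data with a sparse ground truth and a small λ, the solver recovers the support of the true coefficients while shrinking them slightly toward zero.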
Weakly supervised label learning flows
Neural Networks Pub Date : 2024-11-10 DOI: 10.1016/j.neunet.2024.106892
You Lu, Wenzhuo Song, Chidubem Arachie, Bert Huang
{"title":"Weakly supervised label learning flows","authors":"You Lu ,&nbsp;Wenzhuo Song ,&nbsp;Chidubem Arachie ,&nbsp;Bert Huang","doi":"10.1016/j.neunet.2024.106892","DOIUrl":"10.1016/j.neunet.2024.106892","url":null,"abstract":"<div><div>Supervised learning usually requires a large amount of labeled data. However, attaining ground-truth labels is costly for many tasks. Alternatively, weakly supervised methods learn with cheap weak signals that only approximately label some data. Many existing weakly supervised learning methods learn a deterministic function that estimates labels given the input data and weak signals. In this paper, we develop label learning flows (LLF), a general framework for weakly supervised learning problems. Our method is a generative model based on normalizing flows. The main idea of LLF is to optimize the conditional likelihoods of all possible labelings of the data within a constrained space defined by weak signals. We develop a training method for LLF that trains the conditional flow inversely and avoids estimating the labels. Once a model is trained, we can make predictions with a sampling algorithm. We apply LLF to three weakly supervised learning problems. Experiment results show that our method outperforms many baselines we compare against.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"182 ","pages":"Article 106892"},"PeriodicalIF":6.0,"publicationDate":"2024-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Outer synchronization and outer H∞ synchronization for coupled fractional-order reaction-diffusion neural networks with multiweights
Neural Networks Pub Date : 2024-11-09 DOI: 10.1016/j.neunet.2024.106893
Jin-Liang Wang, Si-Yang Wang, Yan-Ran Zhu, Tingwen Huang
{"title":"Outer synchronization and outer H<sub>∞</sub> synchronization for coupled fractional-order reaction-diffusion neural networks with multiweights.","authors":"Jin-Liang Wang, Si-Yang Wang, Yan-Ran Zhu, Tingwen Huang","doi":"10.1016/j.neunet.2024.106893","DOIUrl":"https://doi.org/10.1016/j.neunet.2024.106893","url":null,"abstract":"<p><p>This paper introduces multiple state or spatial-diffusion coupled fractional-order reaction-diffusion neural networks, and discusses the outer synchronization and outer H<sub>∞</sub> synchronization problems for these coupled fractional-order reaction-diffusion neural networks (CFRNNs). The Lyapunov functional method, Laplace transform and inequality techniques are utilized to obtain some outer synchronization conditions for CFRNNs. Moreover, some criteria are also provided to make sure the outer H<sub>∞</sub> synchronization of CFRNNs. Finally, the derived outer and outer H<sub>∞</sub> synchronization conditions are validated on the basis of two numerical examples.</p>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"181 ","pages":"106893"},"PeriodicalIF":6.0,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142639979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
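For orientation, fractional-order network models of this kind are most commonly stated with the Caputo derivative of order α ∈ (0, 1). The abstract does not say which definition the paper adopts, so the following is the standard textbook convention rather than a claim about this paper:

```latex
{}^{C}D_{t}^{\alpha} f(t)
  = \frac{1}{\Gamma(1-\alpha)} \int_{0}^{t} \frac{f'(s)}{(t-s)^{\alpha}}\, ds,
  \qquad 0 < \alpha < 1 .
```

The memory kernel (t − s)^{−α} is what distinguishes the dynamics from integer-order neural network models and motivates the Laplace-transform techniques mentioned in the abstract.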
Complexities of feature-based learning systems, with application to reservoir computing
Neural Networks Pub Date : 2024-11-09 DOI: 10.1016/j.neunet.2024.106883
Hiroki Yasumoto, Toshiyuki Tanaka
{"title":"Complexities of feature-based learning systems, with application to reservoir computing","authors":"Hiroki Yasumoto,&nbsp;Toshiyuki Tanaka","doi":"10.1016/j.neunet.2024.106883","DOIUrl":"10.1016/j.neunet.2024.106883","url":null,"abstract":"<div><div>This paper studies complexity measures of reservoir systems. For this purpose, a more general model that we call a feature-based learning system, which is the composition of a feature map and of a final estimator, is studied. We study complexity measures such as growth function, VC-dimension, pseudo-dimension and Rademacher complexity. On the basis of the results, we discuss how the unadjustability of reservoirs and the linearity of readouts can affect complexity measures of the reservoir systems. Furthermore, some of the results generalize or improve the existing results.</div></div>","PeriodicalId":49763,"journal":{"name":"Neural Networks","volume":"182 ","pages":"Article 106883"},"PeriodicalIF":6.0,"publicationDate":"2024-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142645081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
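Of the complexity measures listed in the abstract above, empirical Rademacher complexity is the one most directly tied to generalization bounds. Its standard definition (a textbook recollection, not notation taken from the paper) is:

```latex
\hat{\mathfrak{R}}_{S}(\mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[\, \sup_{f \in \mathcal{F}}
      \frac{1}{n} \sum_{i=1}^{n} \sigma_{i} f(x_{i}) \right],
```

where σ₁, …, σₙ are i.i.d. uniform ±1 signs and S = (x₁, …, xₙ) is the sample. For a feature-based learning system with a fixed (unadjustable) feature map, the supremum effectively ranges only over the readout class applied to the mapped sample, which is why reservoir fixedness and readout linearity matter for these measures.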