Pattern Recognition — Latest Articles

A masking, linkage and guidance framework for online class incremental learning
IF 7.5 | CAS Tier 1 | Computer Science
Pattern Recognition | Pub Date: 2024-11-15 | DOI: 10.1016/j.patcog.2024.111185
Guoqiang Liang, Zhaojie Chen, Shibin Su, Shizhou Zhang, Yanning Zhang
Abstract: Due to its powerful ability to acquire new knowledge from a dynamic data stream while preserving previously learned concepts, continual learning has recently garnered substantial interest. Since training data can be used only once, online class incremental learning (OCIL) is both more practical and more difficult. Although replay-based OCIL methods have made great progress, a severe class imbalance problem remains: limited by the small memory size, the number of samples for new classes is much larger than that for old classes, which leads to task recency bias and abrupt feature drift. To alleviate this problem, we propose a masking, linkage, and guidance (MLG) framework for OCIL, which consists of three modules: a batch-level logit mask (BLM, masking), batch-level feature cross fusion (BFCF, linkage), and accumulative mean feature distillation (AMFD, guidance). The first two address the class imbalance problem, while the last alleviates abrupt feature drift. In BLM, we activate only the logits of classes occurring in a batch, so the model learns knowledge within each batch. The BFCF module employs a transformer encoder layer to fuse the sample features within a batch, which rebalances the gradients of the classifier's weights and implicitly learns the sample relationships. Instead of the strict regularization of traditional feature distillation, AMFD guides previously learned features to move on purpose, which reduces abrupt feature drift and produces a clearer boundary in feature space. Extensive experiments on four popular OCIL datasets show the effectiveness of the proposed MLG framework. (Pattern Recognition, Volume 160, Article 111185.)
Citations: 0
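The BLM idea is concrete enough to sketch: restrict the softmax to the classes that actually occur in the current batch. Below is a minimal, hypothetical PyTorch sketch; `masked_ce_loss` and its shapes are our own naming, not the authors' code.

```python
# Hypothetical sketch of a batch-level logit mask (BLM): suppress the logits
# of classes absent from the batch before computing cross-entropy.
import torch
import torch.nn.functional as F

def masked_ce_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """logits: (B, C) raw class scores; labels: (B,) integer class ids."""
    present = torch.unique(labels)                  # classes occurring in this batch
    mask = torch.full_like(logits, float("-inf"))   # absent classes get -inf
    mask[:, present] = 0.0                          # present classes pass through
    return F.cross_entropy(logits + mask, labels)

# Usage: loss = masked_ce_loss(model(x), y). Masked classes receive zero
# probability and zero gradient, which is one plausible way to curb the
# task recency bias the abstract describes.
```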
CDHN: Cross-domain hallucination network for 3D keypoints estimation
IF 7.5 | CAS Tier 1 | Computer Science
Pattern Recognition | Pub Date: 2024-11-15 | DOI: 10.1016/j.patcog.2024.111188
Mohammad Zohaib, Milind Gajanan Padalkar, Pietro Morerio, Matteo Taiana, Alessio Del Bue
Abstract: This paper presents a novel method to estimate sparse 3D keypoints from single-view RGB images. Our network is trained in two steps using a knowledge distillation framework. In the first step, the teacher is trained to extract 3D features from point cloud data, which are combined with 2D features to estimate the 3D keypoints. In the second step, the teacher teaches the student module to hallucinate, from RGB images, 3D features similar to those extracted from the point clouds. This procedure lets the network extract 2D and 3D features directly from images at inference time, without requiring point clouds as input. Moreover, the network predicts a confidence score for every keypoint, which is used to select the valid ones from a set of N predicted keypoints; this allows a different number of keypoints to be predicted depending on the object's geometry. We use the estimated keypoints to compute the relative pose between two views of an object. The results are compared with those of KP-Net and StarMap, the state of the art for estimating 3D keypoints from a single-view RGB image. The average angular distance error of our approach (5.94°) is 8.46° and 55.26° lower than that of KP-Net (14.40°) and StarMap (61.20°), respectively. (Pattern Recognition, Volume 160, Article 111188.)
Citations: 0
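The second training step amounts to feature-matching distillation. A minimal sketch of that step is below; the encoder shapes and names (`teacher_3d`, `student_rgb`) are illustrative assumptions, not the paper's architecture.

```python
# Sketch of cross-domain hallucination via distillation: the student maps RGB
# input to the feature space learned by a frozen point-cloud teacher.
import torch
import torch.nn as nn

teacher_3d = nn.Sequential(nn.Linear(3 * 1024, 256), nn.ReLU(), nn.Linear(256, 128))
student_rgb = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 256), nn.ReLU(), nn.Linear(256, 128))

def hallucination_loss(rgb: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """rgb: (B, 3, 64, 64) images; points: (B, 1024, 3) point clouds."""
    with torch.no_grad():                       # teacher was trained in step one
        target = teacher_3d(points.flatten(1))  # (B, 128) 3D features
    pred = student_rgb(rgb)                     # (B, 128) hallucinated features
    return nn.functional.mse_loss(pred, target)
```

At inference only `student_rgb` is needed, which is what makes the method point-cloud-free at test time.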
Lightweight remote sensing super-resolution with multi-scale graph attention network
IF 7.5 | CAS Tier 1 | Computer Science
Pattern Recognition | Pub Date: 2024-11-15 | DOI: 10.1016/j.patcog.2024.111178
Yu Wang, Zhenfeng Shao, Tao Lu, Xiao Huang, Jiaming Wang, Zhizheng Zhang, Xiaolong Zuo
Abstract: Remote sensing super-resolution (RS-SR) is a pivotal component of remote sensing image analysis, aimed at enhancing the spatial resolution of low-resolution imagery. Deep learning techniques have recently achieved substantial progress in RS-SR. Notably, graph neural networks (GNNs) have emerged as a potent mechanism for processing remote sensing images, adept at elucidating the intricate inter-pixel relationships within images. Nevertheless, a prevalent limitation of existing GNN-based methods is their disregard for high computational demands, which circumscribes their applicability in environments with limited computational resources. This paper introduces a streamlined RS-SR framework built on a multi-scale graph attention network (MSGAN), designed to balance computational efficiency with high performance. The core of MSGAN is a novel multi-scale graph attention module, integrating graph attention blocks and multi-scale lattice block structures, engineered to assimilate both localized and extensive spatial information in remote sensing images. This enhances the framework's efficacy and resilience in RS-SR tasks. Comparative experiments demonstrate that MSGAN delivers results competitive with state-of-the-art methods while reducing parameter count and computational overhead, presenting a promising avenue for deployment in scenarios with limited computational resources. (Pattern Recognition, Volume 160, Article 111178.)
Citations: 0
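For readers unfamiliar with graph attention over images, here is a generic single-head graph-attention layer over patch tokens; it illustrates the inter-pixel relation modeling the paper builds on, not the authors' multi-scale lattice module.

```python
# Toy graph attention over patch tokens: each token attends only to its graph
# neighbors. Generic GAT-style sketch; shapes and names are illustrative.
import torch
import torch.nn as nn

class GraphAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, dim) patch tokens; adj: (N, N) 0/1 neighborhood graph,
        # assumed to include self-loops so every row has a neighbor.
        scores = self.q(x) @ self.k(x).T / x.shape[-1] ** 0.5
        scores = scores.masked_fill(adj == 0, float("-inf"))  # keep graph edges only
        return torch.softmax(scores, dim=-1) @ self.v(x)
```

Restricting attention to a sparse neighborhood graph is one standard way to keep the computational cost low, in the spirit of the lightweight design the abstract claims.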
Adaptive learning rate algorithms based on the improved Barzilai–Borwein method
IF 7.5 | CAS Tier 1 | Computer Science
Pattern Recognition | Pub Date: 2024-11-15 | DOI: 10.1016/j.patcog.2024.111179
Zhi-Jun Wang, Hong Li, Zhou-Xiang Xu, Shuai-Ye Zhao, Peng-Jun Wang, He-Bei Gao
Abstract:
Objective: The Barzilai–Borwein (BB) method is essential in solving unconstrained optimization problems, and the momentum method accelerates optimization algorithms via an exponentially weighted moving average. To design reliable deep learning optimization algorithms, this paper proposes applying four variants of the BB method to deep learning optimizers.
Findings: The momentum method generates the BB step size under different step-range limits. We also apply the momentum method and its variants to stochastic gradient descent with the BB step size.
Novelty: The algorithms' robustness is demonstrated through experiments on initial learning rates and random seeds, and their sensitivity is tested by varying the momentum factor until a suitable one is found. Moreover, we compare our algorithms with popular algorithms on various neural networks. The results show that the new algorithms improve the efficiency of the BB step size in deep learning and provide a variety of optimization algorithm choices. (Pattern Recognition, Volume 160, Article 111179.)
Citations: 0
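The two classical BB step sizes are BB1 = (sᵀs)/(sᵀy) and BB2 = (sᵀy)/(yᵀy), with s the iterate difference and y the gradient difference. A minimal sketch combining a BB step with momentum smoothing and a step-range clip follows; the clipping bounds and smoothing form are our assumptions, and the paper's four variants differ in details.

```python
# Sketch: Barzilai-Borwein step size with exponentially weighted smoothing.
import numpy as np

def bb_step(s, y, beta=0.9, prev=1e-3, lo=1e-6, hi=1.0, variant=1):
    """s = x_k - x_{k-1}; y = g_k - g_{k-1} (gradient difference)."""
    sy = float(s @ y)
    if sy <= 0:                      # curvature condition fails: keep previous step
        return prev
    alpha = float(s @ s) / sy if variant == 1 else sy / float(y @ y)
    alpha = float(np.clip(alpha, lo, hi))        # step-range limit
    return beta * prev + (1 - beta) * alpha      # momentum (EWMA) smoothing

# Usage inside SGD: lr = bb_step(x - x_prev, g - g_prev, prev=lr); x -= lr * g
```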
Uncertainty estimation in color constancy
IF 7.5 | CAS Tier 1 | Computer Science
Pattern Recognition | Pub Date: 2024-11-15 | DOI: 10.1016/j.patcog.2024.111175
Marco Buzzelli, Simone Bianco
Abstract: Computational color constancy is an under-determined problem. A key objective is therefore to assign a level of uncertainty to the output illuminant estimates, which can significantly impact the reliability of the corrected images for downstream computer vision tasks. In this paper we formalize uncertainty estimation in color constancy and define three forms of uncertainty that require at most one inference run to estimate. The defined uncertainty estimators are applied to five categories of color constancy algorithms. Experimental results on two standard datasets show a strong correlation between the estimated uncertainty and the illuminant estimation error. Furthermore, we show how color constancy algorithms can be cascaded, leveraging the estimated uncertainty to provide more accurate illuminant estimates. (Pattern Recognition, Volume 160, Article 111175.)
Citations: 0
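The cascading idea can be sketched independently of the paper's specific estimators: run a cheap method first and fall back to a stronger one only when uncertainty is high. The uncertainty proxy below (disagreement between gray-world and max-RGB estimates) is a stand-in of our own, used purely to show the cascading pattern.

```python
# Illustrative uncertainty-driven cascade for illuminant estimation.
import numpy as np

def gray_world(img):                 # img: (H, W, 3) linear RGB
    return img.reshape(-1, 3).mean(axis=0)

def max_rgb(img):
    return img.reshape(-1, 3).max(axis=0)

def estimate_illuminant(img, heavy_model=None, threshold=0.05):
    e1 = gray_world(img); e1 = e1 / np.linalg.norm(e1)
    e2 = max_rgb(img);    e2 = e2 / np.linalg.norm(e2)
    uncertainty = 1.0 - float(e1 @ e2)   # proxy: disagreement between estimators
    if uncertainty < threshold or heavy_model is None:
        return e1                        # confident: the cheap answer suffices
    return heavy_model(img)              # otherwise invoke a stronger method
```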
FocTrack: Focus attention for visual tracking
IF 7.5 | CAS Tier 1 | Computer Science
Pattern Recognition | Pub Date: 2024-11-15 | DOI: 10.1016/j.patcog.2024.111128
Jian Tao, Sixian Chan, Zhenchao Shi, Cong Bai, Shengyong Chen
Abstract: Transformer trackers have achieved widespread success thanks to their attention mechanism. Vanilla attention models long-range dependencies between tokens to gain a global perspective. In human tracking behavior, however, the line of sight first skims apparent regions and then focuses on the differences between similar regions. To exploit this observation, we build a powerful online tracker with focus attention, named FocTrack. First, we design a focus attention module that applies an iterative binary clustering function (IBCF) before self-attention to simulate human behavior: for a given cluster, other clusters are treated as apparent tokens that are skimmed during clustering, while the subsequent self-attention performs focused discriminative learning on the target cluster. Moreover, we propose a local template update strategy (LTUS) to exploit effective temporal information for visual object tracking. At test time, LTUS replaces only outdated local templates, ensuring overall reliability at a low computational cost. Finally, extensive experiments show that FocTrack achieves state-of-the-art performance on several benchmarks. In particular, FocTrack achieves 71.5% AUC on LaSOT and 84.7% AUC on TrackingNet at a running speed of around 36 FPS, outperforming popular approaches. (Pattern Recognition, Volume 160, Article 111128.)
Citations: 0
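A rough sketch of the "skim then focus" pattern: split tokens into two clusters with a few k-means (k = 2) iterations, then run self-attention only inside the cluster containing the target token. This is our own illustration of the idea, not the paper's IBCF implementation.

```python
# Toy focus attention: binary clustering followed by intra-cluster attention.
import torch

def binary_cluster(tokens: torch.Tensor, iters: int = 5) -> torch.Tensor:
    """tokens: (N, D). Returns a (N,) tensor assigning each token to cluster 0/1."""
    centers = tokens[torch.randperm(tokens.shape[0])[:2]]   # init from two tokens
    for _ in range(iters):
        assign = torch.cdist(tokens, centers).argmin(dim=1) # nearest center
        for c in (0, 1):
            if (assign == c).any():
                centers[c] = tokens[assign == c].mean(dim=0)
    return assign

def focus_attention(tokens: torch.Tensor, target_idx: int) -> torch.Tensor:
    assign = binary_cluster(tokens)
    group = tokens[assign == assign[target_idx]]             # the focused cluster
    attn = torch.softmax(group @ group.T / group.shape[-1] ** 0.5, dim=-1)
    return attn @ group                                      # attention within cluster
```

Tokens outside the target's cluster are only "skimmed" by the clustering pass and never enter the quadratic attention, which is what makes the focus step discriminative.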
Dual Contrastive Label Enhancement
IF 7.5 | CAS Tier 1 | Computer Science
Pattern Recognition | Pub Date: 2024-11-15 | DOI: 10.1016/j.patcog.2024.111183
Ren Guan, Yifei Wang, Xinyuan Liu, Bin Chen, Jihua Zhu
Abstract: Label enhancement (LE) strives to convert the logical labels of instances into label distributions, providing data preparation for label distribution learning (LDL). Existing LE methods ordinarily neglect to treat original features and logical labels as two complementary descriptive views of instances for extracting implicit related information across views, resulting in insufficient use of the feature and logical-label information. To address this issue, we propose a novel method named Dual Contrastive Label Enhancement (DCLE). This method regards original features and logical labels as two view-specific descriptions and encodes them into a unified projection space. We employ a dual contrastive learning strategy at both the instance level and the class level to excavate cross-view consensus information and distinguish instance representations by exploring inherent correlations among features, thereby generating high-level representations of the instances. To recover label distributions from the obtained high-level representations, we design a distance-minimized and margin-penalized training strategy that preserves the consistency of label attributes. Extensive experiments on 13 LDL benchmark datasets validate the efficacy and competitiveness of DCLE. (Pattern Recognition, Volume 160, Article 111183.)
Citations: 0
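The instance-level half of a dual contrastive objective can be written as a standard cross-view InfoNCE term, where the i-th feature projection should match the i-th label projection. This generic sketch assumes two encoders producing `z_feat` and `z_label`; it is not the authors' full dual objective.

```python
# Instance-level cross-view contrastive loss (InfoNCE) between the feature
# view and the logical-label view of the same batch of instances.
import torch
import torch.nn.functional as F

def instance_contrastive(z_feat: torch.Tensor, z_label: torch.Tensor, tau: float = 0.1):
    """z_feat, z_label: (B, D) projections of the two views in a shared space."""
    z1 = F.normalize(z_feat, dim=1)
    z2 = F.normalize(z_label, dim=1)
    logits = z1 @ z2.T / tau                                  # (B, B) similarities
    targets = torch.arange(z1.shape[0], device=z1.device)     # diagonal = positives
    return F.cross_entropy(logits, targets)
```

A class-level counterpart would apply the same pattern to per-class prototypes rather than per-instance projections.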
Learning data association for multi-object tracking using only coordinates
IF 7.5 | CAS Tier 1 | Computer Science
Pattern Recognition | Pub Date: 2024-11-14 | DOI: 10.1016/j.patcog.2024.111169
Mehdi Miah, Guillaume-Alexandre Bilodeau, Nicolas Saunier
Abstract: We propose a novel Transformer-based module to address the data association problem for multi-object tracking. From detections produced by a pretrained detector, the module uses only bounding-box coordinates to estimate an affinity score between pairs of tracks extracted from two distinct temporal windows. This module, named TWiX, is trained on sets of tracks to discriminate pairs of tracks coming from the same object from those that are not. It uses neither the intersection-over-union measure nor any motion prior or camera motion compensation technique. By inserting TWiX into an online cascade matching pipeline, our tracker C-TWiX achieves state-of-the-art performance on the DanceTrack and KITTIMOT datasets and obtains competitive results on the MOT17 dataset. The code will be made available upon publication at https://mehdimiah.com/twix. (Pattern Recognition, Volume 160, Article 111169.)
Citations: 0
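Scoring a track pair from coordinates alone can be sketched as: embed each box as a token, jointly encode the two tracks with a transformer, and read out a scalar affinity. Layer sizes and the (x, y, w, h, t) token layout below are our assumptions, not the published TWiX architecture.

```python
# Sketch of a coordinate-only pairwise affinity module for data association.
import torch
import torch.nn as nn

class PairAffinity(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.embed = nn.Linear(5, dim)            # (x, y, w, h, t) -> token
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 1)

    def forward(self, track_a: torch.Tensor, track_b: torch.Tensor) -> torch.Tensor:
        # track_a: (B, Ta, 5) boxes from window 1; track_b: (B, Tb, 5) from window 2
        tokens = self.embed(torch.cat([track_a, track_b], dim=1))
        encoded = self.encoder(tokens)            # joint encoding of both tracks
        return self.head(encoded.mean(dim=1))     # (B, 1) affinity score
```

Trained with a same-object/different-object objective, such scores can feed a matching step (e.g., Hungarian assignment) in place of IoU.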
Pseudo-labeling with keyword refining for few-supervised video captioning
IF 7.5 | CAS Tier 1 | Computer Science
Pattern Recognition | Pub Date: 2024-11-14 | DOI: 10.1016/j.patcog.2024.111176
Ping Li, Tao Wang, Xinkui Zhao, Xianghua Xu, Mingli Song
Abstract: Video captioning generates a sentence that describes the video content. Existing methods typically require a number of captions (e.g., 10 or 20) per video to train the model, which is quite costly. In this work, we explore the possibility of using only one or very few ground-truth sentences, and introduce a new task named few-supervised video captioning. Specifically, we propose a framework that consists of a lexically constrained pseudo-labeling module and a keyword-refined captioning module. Unlike random sampling in natural language processing, which may cause invalid modifications of edited words, the pseudo-labeling module guides the model to edit words through actions (e.g., copy, replace, insert, and delete) predicted by a pretrained token-level classifier, and then fine-tunes candidate sentences with a pretrained language model. It also employs repetition-penalized sampling to encourage concise pseudo-labeled sentences with less repetition, and selects the most relevant sentences using a pretrained video-text model. Moreover, to keep pseudo-labeled sentences semantically consistent with the video content, we develop a transformer-based keyword refiner with a video-keyword gated fusion strategy that emphasizes the most relevant words. Extensive experiments on several benchmarks demonstrate the advantages of the proposed approach in both few-supervised and fully supervised scenarios. (Pattern Recognition, Volume 159, Article 111176.)
Citations: 0
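The four edit actions are easy to make concrete. The toy function below applies a given list of (action, argument) pairs to a caption; in the paper the actions come from a pretrained token-level classifier, which we replace here with an explicit list for illustration.

```python
# Toy application of copy/replace/insert/delete edit actions to a caption.
def apply_edits(tokens, actions):
    """tokens: list of words; actions: one (op, arg) pair per token."""
    out = []
    for tok, (op, arg) in zip(tokens, actions):
        if op == "copy":
            out.append(tok)
        elif op == "replace":
            out.append(arg)             # arg: the replacement word
        elif op == "insert":
            out.extend([tok, arg])      # arg: a word inserted after tok
        # "delete": emit nothing for this token
    return out

print(apply_edits(["a", "man", "running"],
                  [("copy", None), ("replace", "dog"), ("copy", None)]))
# -> ['a', 'dog', 'running']
```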
Progressive label enhancement
IF 7.5 | CAS Tier 1 | Computer Science
Pattern Recognition | Pub Date: 2024-11-14 | DOI: 10.1016/j.patcog.2024.111172
Zhiqiang Kou, Jing Wang, Yuheng Jia, Xin Geng
Abstract: Label distribution learning (LDL) leverages label distributions (LD) to represent instances, which helps resolve label ambiguity. However, obtaining LD can be extremely challenging in many real-world scenarios. Label enhancement (LE) has emerged as a solution that enhances logical labels, which are highly available, into LD. In this paper, we explore the application of dimension reduction techniques to LE and present a learning framework named Progressive Label Enhancement (PLE). PLE progressively alternates dependency-maximization-oriented dimension reduction and LE: first, it generates LD by leveraging the manifold structure of the feature space induced by the dimension reduction; second, it optimizes the projection matrix for dependency maximization based on the obtained LD. Extensive experiments on 15 real-world datasets consistently demonstrate that PLE outperforms the six comparative approaches. (Pattern Recognition, Volume 160, Article 111172.)
Citations: 0
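Dependency-maximization criteria of this kind are often instantiated with the Hilbert–Schmidt Independence Criterion (HSIC). As an assumed illustration of such a criterion (the abstract does not name the exact measure), here is the plain biased HSIC estimate between projected features and label distributions.

```python
# Biased HSIC estimator: a standard measure of dependence between two views,
# here between projected features X and label distributions Y.
import numpy as np

def rbf_gram(A: np.ndarray, gamma: float = 1.0) -> np.ndarray:
    sq = np.sum(A * A, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * A @ A.T))

def hsic(X: np.ndarray, Y: np.ndarray, gamma: float = 1.0) -> float:
    n = X.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    K, L = rbf_gram(X, gamma), rbf_gram(Y, gamma)
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2

# In a PLE-style loop, one would choose the projection W to maximize
# hsic(X @ W, D) for the current label distributions D, then re-estimate D.
```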