IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society): Latest Publications

Denoised and Dynamic Alignment Enhancement for Zero-Shot Learning
Jiannan Ge;Zhihang Liu;Pandeng Li;Lingxi Xie;Yongdong Zhang;Qi Tian;Hongtao Xie
{"title":"Denoised and Dynamic Alignment Enhancement for Zero-Shot Learning","authors":"Jiannan Ge;Zhihang Liu;Pandeng Li;Lingxi Xie;Yongdong Zhang;Qi Tian;Hongtao Xie","doi":"10.1109/TIP.2025.3544481","DOIUrl":"10.1109/TIP.2025.3544481","url":null,"abstract":"Zero-shot learning (ZSL) focuses on recognizing unseen categories by aligning visual features with semantic information. Recent advancements have shown that aligning each attribute with its corresponding visual region significantly improves zero-shot learning performance. However, the crude semantic proxies used in these methods fail to capture the varied appearances of each attribute, and are also easily confused by the presence of semantically redundant backgrounds, leading to suboptimal alignment. To combat these issues, we introduce a novel Alignment-Enhanced Network (AENet), designed to denoise the visual features and dynamically perceive semantic information, thus enhancing visual-semantic alignment. Our approach comprises two key innovations. (1) A visual denoising encoder, employing a class-agnostic mask to filter out semantically redundant visual information, thus producing refined visual features adaptable to unseen classes. (2) A dynamic semantic generator that crafts content-aware semantic proxies adaptively, steered by visual features, enabling AENet to discriminate fine-grained variations in visual contents. Additionally, we integrate a cross-fusion module to ensure comprehensive interaction between the denoised visual features and the generated dynamic semantic proxies, further facilitating visual-semantic alignment. Through extensive experiments across three datasets, the proposed method demonstrates that it narrows down the visual-semantic gap and sets a new benchmark in this setting.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1501-1515"},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143538775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
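A minimal sketch (PyTorch) of the two ideas in the abstract above: a class-agnostic mask that suppresses redundant patch features, and content-aware semantic proxies produced by attending attribute embeddings over the denoised visual features. All module and variable names are my own, not the authors' code; the attribute count 312 is just an illustrative choice.

```python
import torch
import torch.nn as nn

class DenoiseAndDynamicProxies(nn.Module):
    def __init__(self, dim=512, n_attrs=312):
        super().__init__()
        self.mask_head = nn.Linear(dim, 1)              # class-agnostic patch mask
        self.attr_embed = nn.Parameter(torch.randn(n_attrs, dim))  # static attribute proxies
        self.cross_attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)

    def forward(self, patches):                         # patches: (B, N, dim)
        gate = torch.sigmoid(self.mask_head(patches))   # (B, N, 1), ~0 for redundant background
        denoised = patches * gate
        # attributes act as queries; visual content steers each proxy
        q = self.attr_embed.unsqueeze(0).expand(patches.size(0), -1, -1)
        proxies, _ = self.cross_attn(q, denoised, denoised)   # (B, n_attrs, dim), content-aware
        # per-attribute alignment score between dynamic proxy and static embedding
        scores = torch.cosine_similarity(proxies, q, dim=-1)  # (B, n_attrs)
        return denoised, proxies, scores

feats = torch.randn(4, 196, 512)                        # e.g. ViT patch tokens
_, proxies, scores = DenoiseAndDynamicProxies()(feats)
print(proxies.shape, scores.shape)                      # (4, 312, 512) (4, 312)
```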
Cross-Camera Pedestrian Trajectory Retrieval Based on Linear Trajectory Manifolds
Xin Zhang;Xiaohua Xie;Jianhuang Lai
{"title":"Cross-Camera Pedestrian Trajectory Retrieval Based on Linear Trajectory Manifolds","authors":"Xin Zhang;Xiaohua Xie;Jianhuang Lai","doi":"10.1109/TIP.2025.3544494","DOIUrl":"10.1109/TIP.2025.3544494","url":null,"abstract":"The goal of pedestrian trajectory retrieval is to infer the multi-camera path of a targeted pedestrian using images or videos from a camera network, which is crucial for passenger flow analytics and individual pedestrian retrieval. Conventional approaches hinge on spatiotemporal modeling, necessitating the gathering of positional information for each camera and trajectory data between every camera pair for the training phase. To mitigate these stringent requirements, our proposed methodology employs solely temporal information for modeling. Specifically, we introduce an Implicit Trajectory Encoding scheme, dubbed Temporal Rotary Position Embedding (T-RoPE), which integrates the temporal aspects of within-camera tracklets directly into their visual representations, thereby shaping a novel feature space. Our analysis reveals that, within this refined feature space, the challenge of inter-camera trajectory extraction can be effectively addressed by delineating a linear trajectory manifold. The visual characteristics gleaned from each candidate trajectory are utilized to compare and rank against the query feature, culminating in the ultimate trajectory retrieval outcome. To validate our method, we collected a new pedestrian trajectory dataset from a multi-storey shopping mall, namely the Mall Trajectory Dataset. Extensive experimentation across diverse datasets has demonstrated the versatility of our T-RoPE module as a plug-and-play enhancement to various network architectures, significantly enhancing the precision of pedestrian trajectory retrieval tasks. The dataset and code are released at <uri>https://github.com/zhangxin1995/MTD</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1545-1559"},"PeriodicalIF":0.0,"publicationDate":"2025-03-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143538780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
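A minimal sketch of rotary position embedding driven by a continuous timestamp, in the spirit of the T-RoPE idea above. This is my own simplification, not the released code at https://github.com/zhangxin1995/MTD.

```python
import numpy as np

def t_rope(feat, t, base=10000.0):
    """Rotate feature pairs by angles proportional to the tracklet time t."""
    d = feat.shape[-1]
    assert d % 2 == 0
    inv_freq = base ** (-np.arange(0, d, 2) / d)   # one frequency per feature pair
    ang = t * inv_freq
    cos, sin = np.cos(ang), np.sin(ang)
    x1, x2 = feat[..., 0::2], feat[..., 1::2]
    out = np.empty_like(feat)
    out[..., 0::2] = x1 * cos - x2 * sin           # 2D rotation of each pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

# Rotations preserve the norm, so appearance information is kept intact while
# relative time becomes a phase difference between two encoded tracklet features.
f = np.random.randn(256)
print(np.allclose(np.linalg.norm(t_rope(f, 12.5)), np.linalg.norm(f)))  # True
```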
Rethinking Semantic Segmentation With Multi-Grained Logical Prototype
Anzhu Yu;Kuiliang Gao;Xiong You;Yanfei Zhong;Yu Su;Bing Liu;Chunping Qiu
{"title":"Rethinking Semantic Segmentation With Multi-Grained Logical Prototype","authors":"Anzhu Yu;Kuiliang Gao;Xiong You;Yanfei Zhong;Yu Su;Bing Liu;Chunping Qiu","doi":"10.1109/TIP.2025.3543052","DOIUrl":"10.1109/TIP.2025.3543052","url":null,"abstract":"The last decade has witnessed significant advances in semantic segmentation brought about by deep learning. However, existing methods only fit the data-label correspondence in a data-driven manner and do not fully conform to the abstraction and structuralization characteristics of the human visual cognition process, which limits the upper bounds of their performance. To this end, a multi-grained logical prototype (MGLP) method is proposed to rethink semantic segmentation based on these two key characteristics. Its novel design can be summarized as follows. 1) For abstraction, prototypes of the same class at different grain levels are established: a label generation method is proposed to automatically generate a multi-grained label space, which can guide the learning of the multi-grained prototypes for each class. 2) For structuralization, the intrinsic logical structure across different semantic levels is explicitly modeled: the horizontal metric relationships are established via metric relation operations on prototypes at the same grain level, to improve the discriminability between classes while taking the vertical semantic hierarchy into account. Moveover, the vertical logical relationships are established as the sub-to-super positive and super-to-sub negative constraints, to strengthen the semantic dependencies among prototypes at different grain levels. 3)MGLP is plug-and-play and can be directly combined with existing segmentation methods. Extensive experimental results indicate that MGLP can significantly improve the segmentation performance of existing methods, which opens up a new avenue for future research.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1469-1484"},"PeriodicalIF":0.0,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143506922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
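A minimal sketch (my own construction, in PyTorch) of multi-grained prototype scoring with a sub-to-super consistency term, illustrating the abstract's idea of coupling prototypes across grain levels. The two-level class hierarchy here is invented for illustration.

```python
import torch
import torch.nn.functional as F

fine_to_coarse = torch.tensor([0, 0, 1, 1, 1])   # 5 fine classes -> 2 coarse classes
protos_fine = F.normalize(torch.randn(5, 64), dim=1)
protos_coarse = F.normalize(torch.randn(2, 64), dim=1)

def mglp_logits(pix):                            # pix: (P, 64) pixel embeddings
    pix = F.normalize(pix, dim=1)
    fine = pix @ protos_fine.T                   # (P, 5) fine-grained similarity
    coarse = pix @ protos_coarse.T               # (P, 2) coarse-grained similarity
    # sub-to-super positive: a pixel's best fine score should back its coarse parent
    lifted = torch.zeros_like(coarse).scatter_reduce(
        1, fine_to_coarse.expand_as(fine), fine, reduce="amax", include_self=False)
    consistency = F.mse_loss(lifted, coarse)     # penalize disagreement across grains
    return fine, coarse, consistency

fine, coarse, loss = mglp_logits(torch.randn(10, 64))
print(fine.shape, coarse.shape, loss.item() >= 0)
```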
Dynamic Atomic Column Detection in Transmission Electron Microscopy Videos via Ridge Estimation
Yuchen Xu;Andrew M. Thomas;Peter A. Crozier;David S. Matteson
{"title":"Dynamic Atomic Column Detection in Transmission Electron Microscopy Videos via Ridge Estimation","authors":"Yuchen Xu;Andrew M. Thomas;Peter A. Crozier;David S. Matteson","doi":"10.1109/TIP.2025.3543138","DOIUrl":"10.1109/TIP.2025.3543138","url":null,"abstract":"Ridge detection is a classical tool to extract curvilinear features in image processing. As such, it has great promise in applications to material science problems; specifically, for trend filtering relatively stable atom-shaped objects in image sequences, such as bright-field Transmission Electron Microscopy (TEM) videos. Standard analysis of TEM videos is limited to frame-by-frame object recognition. We instead harness temporal correlation across frames through simultaneous analysis of long image sequences, specified as a spatio-temporal image tensor. We define new ridge detection algorithms to non-parametrically estimate explicit trajectories of atomic-level object locations as a continuous function of time. Our approach is specially tailored to handle temporal analysis of objects that seemingly stochastically disappear and subsequently reappear throughout a sequence. We demonstrate that the proposed method is highly effective in simulation scenarios, and delivers notable performance improvements in TEM experiments compared to other material science benchmarks.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1588-1601"},"PeriodicalIF":0.0,"publicationDate":"2025-02-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143506921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
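A minimal sketch (NumPy/SciPy) of the classical Hessian-based ridge test that this line of work builds on: a ridge point has a strongly negative Hessian eigenvalue, and the first derivative along that eigenvalue's direction is near zero. The spatio-temporal tensor and trajectory machinery of the paper is not shown; thresholds are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def ridge_mask(img, sigma=2.0, thresh=-0.01):
    # smoothed partial derivatives via Gaussian derivative filters
    gx  = gaussian_filter(img, sigma, order=(0, 1))
    gy  = gaussian_filter(img, sigma, order=(1, 0))
    gxx = gaussian_filter(img, sigma, order=(0, 2))
    gyy = gaussian_filter(img, sigma, order=(2, 0))
    gxy = gaussian_filter(img, sigma, order=(1, 1))
    # smaller eigenvalue of the 2x2 Hessian and its eigenvector (closed form)
    tmp = np.sqrt((gxx - gyy) ** 2 + 4 * gxy ** 2)
    lam = 0.5 * (gxx + gyy - tmp)                 # most negative curvature
    vx, vy = 2 * gxy, gyy - gxx - tmp             # its (unnormalized) eigenvector
    norm = np.hypot(vx, vy) + 1e-12
    deriv = (gx * vx + gy * vy) / norm            # first derivative along it
    return (lam < thresh) & (np.abs(deriv) < 1e-3)

frame = gaussian_filter(np.random.rand(64, 64), 3)   # stand-in for one TEM frame
print(ridge_mask(frame).sum())                       # number of candidate ridge pixels
```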
Reserve to Adapt: Mining Inter-Class Relations for Open-Set Domain Adaptation
Yujun Tong;Dongliang Chang;Da Li;Xinran Wang;Kongming Liang;Zhongjiang He;Yi-Zhe Song;Zhanyu Ma
{"title":"Reserve to Adapt: Mining Inter-Class Relations for Open-Set Domain Adaptation","authors":"Yujun Tong;Dongliang Chang;Da Li;Xinran Wang;Kongming Liang;Zhongjiang He;Yi-Zhe Song;Zhanyu Ma","doi":"10.1109/TIP.2025.3534023","DOIUrl":"10.1109/TIP.2025.3534023","url":null,"abstract":"Open-Set Domain Adaptation (OSDA) aims at adapting a model trained on a labelled source domain, to an unlabeled target domain that is corrupted with unknown classes. The key challenge inherent to this open-set setting is therefore how best to avoid the negative transfer incurred by unknown classes during model adaptation. Most existing works tackle this challenge by simply pushing the entire unknown classes away. In this paper, we take a different stance – instead of addressing these unknown classes as a single entity, we “reserve” in-between spaces for their subsets in the learned embedding. Our key finding is that the inter-class relations learned off the source domain, can help to enforce class separations in the target domain – thereby reserving spaces for unknown classes. More specifically, we first prep the “reservation” by tightening the known-class representations while enlarging their inter-class margin. We then learn soft-label prototypes in the source domain to facilitate the discrimination of known and unknown samples in the target domain. It follows that these two steps are iterated at each epoch in a mutually beneficial manner – better discrimination of unknown samples helps with space reservation, and vice versa. We show state-of-the-art results on four standard OSDA datasets, Office-31, Office-Home, VisDA and ImageCLEF, and conduct further analysis to help understand our method. Codes are available at: <uri>https://github.com/PRIS-CV/Reserve_to_Adapt</uri>","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1382-1397"},"PeriodicalIF":0.0,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143495332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
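A minimal sketch (my own, in PyTorch) of the two "reservation" ingredients the abstract describes: tightening known-class clusters around their prototypes while pushing prototypes apart, then turning prototype similarities into soft labels whose low confidence flags likely-unknown target samples. It is not the released code at the repository above.

```python
import torch
import torch.nn.functional as F

def reservation_loss(feats, labels, protos, margin=0.5):
    feats = F.normalize(feats, dim=1)
    protos = F.normalize(protos, dim=1)
    tight = (1 - (feats * protos[labels]).sum(1)).mean()   # pull to own prototype
    sim = protos @ protos.T - torch.eye(len(protos))       # off-diagonal similarities
    spread = F.relu(sim - (1 - margin)).mean()             # enlarge inter-class margin
    return tight + spread

def soft_labels(feats, protos, tau=0.1):
    sim = F.normalize(feats, dim=1) @ F.normalize(protos, dim=1).T
    p = F.softmax(sim / tau, dim=1)
    # a low max-probability means the sample falls in the reserved in-between
    # space, i.e. it is likely an unknown-class target sample
    return p, p.max(1).values

feats, labels = torch.randn(32, 128), torch.randint(0, 10, (32,))
protos = torch.randn(10, 128, requires_grad=True)
loss = reservation_loss(feats, labels, protos)
p, known_score = soft_labels(feats, protos)
print(loss.item(), known_score.shape)
```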
Optimization Design of Projection Grating Wavelength for Robust 3D Imaging
Jianhua Wang;Yanxi Yang
{"title":"Optimization Design of Projection Grating Wavelength for Robust 3D Imaging","authors":"Jianhua Wang;Yanxi Yang","doi":"10.1109/TIP.2025.3541543","DOIUrl":"10.1109/TIP.2025.3541543","url":null,"abstract":"For three-dimensional (3D) imaging based on fringe projection profilometry (FPP), maximum fringe frequency selection and fringe frequencies allocation have a significant impact on the accuracy and robustness of 3D imaging. In this paper, we conduct a detailed analysis of the wrapped phase error, and analyze the phase unwrapping reliability in the three-frequency temporal phase unwrapping (TPU). Since different measurement systems and scenes having different maximum sampling frequencies, we introduce a maximum frequency selection approach in this work. In order to ensure the overall phase unwrapping reliability, we introduce an optimal frequencies allocation approach. Experimental results show the valid of the proposed approach. The research in this paper will help to improve the accuracy and robustness of FPP in practical 3D measurement.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1398-1411"},"PeriodicalIF":0.0,"publicationDate":"2025-02-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143495334","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
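A minimal sketch (NumPy) of the hierarchical three-frequency temporal phase unwrapping that the paper analyzes: the absolute phase at a coarser frequency determines the fringe order k of the next, finer frequency. The frequency triple (1, 8, 64) is an invented example; the paper's contribution is precisely how such frequencies should be chosen and allocated.

```python
import numpy as np

def unwrap_step(phi_fine, Phi_coarse, f_fine, f_coarse):
    """Lift wrapped phi_fine to absolute phase using absolute Phi_coarse."""
    k = np.round((Phi_coarse * f_fine / f_coarse - phi_fine) / (2 * np.pi))
    return phi_fine + 2 * np.pi * k            # fringe-order correction

f1, f2, f3 = 1, 8, 64                          # unit frequency is absolute already
x = np.linspace(0, 1, 1000)
true = 2 * np.pi * f3 * x                      # ground-truth absolute phase at f3
wrap = lambda p: np.angle(np.exp(1j * p))      # wrap into (-pi, pi]
Phi1 = 2 * np.pi * f1 * x                      # one fringe spans the field: no unwrapping
Phi2 = unwrap_step(wrap(2 * np.pi * f2 * x), Phi1, f2, f1)
Phi3 = unwrap_step(wrap(2 * np.pi * f3 * x), Phi2, f3, f2)
print(np.allclose(Phi3, true))                 # True in the noise-free case
```

In the noise-free case the rounding recovers the fringe order exactly; with phase noise, the frequency ratios f_fine/f_coarse bound how much wrapped-phase error the rounding can absorb, which is what motivates the paper's reliability analysis.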
Image Clustering With Transition Probabilities Learning
Xingyu Xue;Wenhui Zhao;Quanxue Gao;Ming Yang;Cheng Deng
{"title":"Image Clustering With Transition Probabilities Learning","authors":"Xingyu Xue;Wenhui Zhao;Quanxue Gao;Ming Yang;Cheng Deng","doi":"10.1109/TIP.2025.3542602","DOIUrl":"10.1109/TIP.2025.3542602","url":null,"abstract":"Large-scale multi-view clustering for image data has achieved impressive clustering performance and efficiency. However, most methods lack interpretability in clustering and do not fully consider the complementarity of distributions between different views. To address these problems, we introduce Multi-View Clustering with Transition Probabilities Learning (MVC-TPL). Specifically, we construct an anchor graph factorization model from the perspective of transition probabilities, while simultaneously learning transition probability matrices from samples to clusters and from anchor points to clusters, serving as soft label matrices for samples and anchor points, respectively. This model enables one-step label acquisition and provides the model with a sound probability interpretation. Moreover, since the clusters of samples and anchor points should be consistent across all views, we employ Schatten p-norm regularization on the two matrices, effectively mining the complementary information distributed among the views, thereby aligning the labels across views more consistently. Comprehensive testing on four small-scale datasets and three large-scale datasets confirms the effectiveness of this model.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1441-1453"},"PeriodicalIF":0.0,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143486022","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
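A minimal sketch (NumPy, my own notation) of reading an anchor graph as transition probabilities: a row-stochastic sample-to-anchor matrix composed with an anchor-to-cluster matrix yields soft cluster labels in one step. In the paper both matrices are learned jointly with cross-view Schatten p-norm regularization; here Q is just a random placeholder.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))            # samples
A = X[rng.choice(100, 10, replace=False)] # 10 anchors picked from the data

d2 = ((X[:, None] - A[None]) ** 2).sum(-1)
Z = np.exp(-d2 / d2.mean())
Z /= Z.sum(1, keepdims=True)              # P(anchor | sample), row-stochastic

Q = rng.random((10, 3))
Q /= Q.sum(1, keepdims=True)              # P(cluster | anchor); learned in the paper

P = Z @ Q                                 # P(cluster | sample): soft labels, one step
labels = P.argmax(1)
print(P.shape, np.allclose(P.sum(1), 1))  # (100, 3) True -- a valid distribution
```

The probabilistic reading is what the abstract calls interpretability: each row of P is a genuine distribution over clusters, since a product of row-stochastic matrices is row-stochastic.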
When Adversarial Training Meets Prompt Tuning: Adversarial Dual Prompt Tuning for Unsupervised Domain Adaptation
Chaoran Cui;Ziyi Liu;Shuai Gong;Lei Zhu;Chunyun Zhang;Hui Liu
{"title":"When Adversarial Training Meets Prompt Tuning: Adversarial Dual Prompt Tuning for Unsupervised Domain Adaptation","authors":"Chaoran Cui;Ziyi Liu;Shuai Gong;Lei Zhu;Chunyun Zhang;Hui Liu","doi":"10.1109/TIP.2025.3541868","DOIUrl":"10.1109/TIP.2025.3541868","url":null,"abstract":"Unsupervised domain adaptation (UDA) aims to adapt models learned from a well-annotated source domain to a target domain, where only unlabeled samples are available. To this end, adversarial training is widely used in conventional UDA methods to reduce the discrepancy between source and target domains. Recently, prompt tuning has emerged as an efficient way to adapt large pre-trained vision-language models like CLIP to a variety of downstream tasks. In this paper, we present a novel method named Adversarial DuAl Prompt Tuning (ADAPT) for UDA, which employs text prompts and visual prompts to guide CLIP simultaneously. Rather than simply performing a joint optimization of text prompts and visual prompts, we integrate text prompt tuning and visual prompt tuning into a collaborative framework where they engage in an adversarial game: text prompt tuning focuses on distinguishing between source and target images, whereas visual prompt tuning seeks to align source and target domains. Unlike most existing adversarial training-based UDA approaches, ADAPT does not require explicit domain discriminators for domain alignment. Instead, the objective is effectively achieved at both global and category levels through modeling the joint probability distribution of images on domains and categories. Extensive experiments on four benchmark datasets demonstrate the effectiveness of our ADAPT method for UDA. We have released our code at <uri>https://github.com/Liuziyi1999/ADAPT</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1427-1440"},"PeriodicalIF":0.0,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143486021","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
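A minimal sketch (PyTorch, my own simplification) of the kind of adversarial game the abstract describes: one branch learns to tell source from target while a gradient-reversal layer makes the other branch's learnable prompt align the two domains. CLIP is mocked by a frozen linear encoder, and a plain domain head stands in for the text-prompt side, so this illustrates the min-max mechanics rather than ADAPT itself.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        return x
    @staticmethod
    def backward(ctx, g):
        return -g                            # flip gradients: min-max in one pass

encoder = nn.Linear(512, 256)                # stand-in for CLIP's image tower
visual_prompt = nn.Parameter(torch.zeros(512))   # added to the input tokens
domain_head = nn.Linear(256, 2)              # plays the discriminating role here

x = torch.randn(8, 512)                      # mixed source/target batch
dom = torch.randint(0, 2, (8,))              # 0 = source, 1 = target
feat = encoder(x + visual_prompt)
logits = domain_head(GradReverse.apply(feat))
loss = nn.functional.cross_entropy(logits, dom)
loss.backward()                              # head learns to discriminate,
print(visual_prompt.grad.norm() > 0)         # the prompt learns to confuse it
```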
Optimal Graph Learning-Based Label Propagation for Cross-Domain Image Classification
Wei Wang;Mengzhu Wang;Chao Huang;Cong Wang;Jie Mu;Feiping Nie;Xiaochun Cao
{"title":"Optimal Graph Learning-Based Label Propagation for Cross-Domain Image Classification","authors":"Wei Wang;Mengzhu Wang;Chao Huang;Cong Wang;Jie Mu;Feiping Nie;Xiaochun Cao","doi":"10.1109/TIP.2025.3526380","DOIUrl":"10.1109/TIP.2025.3526380","url":null,"abstract":"Label propagation (LP) is a popular semi-supervised learning technique that propagates labels from a training dataset to a test one using a similarity graph, assuming that nearby samples should have similar labels. However, the recent cross-domain problem assumes that training (source domain) and test data sets (target domain) follow different distributions, which may unexpectedly degrade the performance of LP due to small similarity weights connecting the two domains. To address this problem, we propose optimal graph learning-based label propagation (OGL2P), which optimizes one cross-domain graph and two intra-domain graphs to connect the two domains and preserve domain-specific structures, respectively. During label propagation, the cross-domain graph draws two labels close if they are nearby in feature space and from different domains, while the intra-domain graph pulls two labels close if they are nearby in feature space and from the same domain. This makes label propagation more insensitive to cross-domain problems. During graph embedding, we optimize the three graphs using features and labels in the embedded subspace to extract locally discriminative and domain-invariant features and make the graph construction process robust to noise in the original feature space. Notably, as a more relaxed constraint, locally discriminative and domain-invariant can somewhat alleviate the contradiction between discriminability and domain-invariance. Finally, we conduct extensive experiments on five cross-domain image classification datasets to verify that OGL2P outperforms some state-of-the-art cross-domain approaches.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1529-1544"},"PeriodicalIF":0.0,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143486026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
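A minimal sketch (NumPy) of the label-propagation core that OGL2P builds on: labels diffuse over a normalized similarity graph, with the closed-form solution F = (I - aS)^(-1)Y. The paper's learned cross-domain and intra-domain graphs would replace the single Gaussian graph W used here.

```python
import numpy as np

def propagate(X, Y, alpha=0.99, sigma=1.0):
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))          # Gaussian similarity graph
    np.fill_diagonal(W, 0)
    Dm = np.diag(W.sum(1) ** -0.5)
    S = Dm @ W @ Dm                             # symmetric normalization
    # closed form of iterating F <- alpha*S@F + (1-alpha)*Y, up to scaling
    return np.linalg.solve(np.eye(len(X)) - alpha * S, Y)

X = np.vstack([np.random.randn(20, 2), np.random.randn(20, 2) + 4])
Y = np.zeros((40, 2)); Y[0, 0] = Y[20, 1] = 1   # one seed label per cluster
pred = propagate(X, Y).argmax(1)
print((pred[:20] == 0).mean(), (pred[20:] == 1).mean())  # ~1.0 each
```

The cross-domain failure mode the abstract targets is visible in this sketch: if the two clusters were the two domains, small inter-cluster weights in W would keep labels from crossing over, which is exactly what the learned cross-domain graph is meant to fix.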
MaCon: A Generic Self-Supervised Framework for Unsupervised Multimodal Change Detection
Jian Wang;Li Yan;Jianbing Yang;Hong Xie;Qiangqiang Yuan;Pengcheng Wei;Zhao Gao;Ce Zhang;Peter M. Atkinson
{"title":"MaCon: A Generic Self-Supervised Framework for Unsupervised Multimodal Change Detection","authors":"Jian Wang;Li Yan;Jianbing Yang;Hong Xie;Qiangqiang Yuan;Pengcheng Wei;Zhao Gao;Ce Zhang;Peter M. Atkinson","doi":"10.1109/TIP.2025.3542276","DOIUrl":"10.1109/TIP.2025.3542276","url":null,"abstract":"Change detection(CD) is important for Earth observation, emergency response and time-series understanding. Recently, data availability in various modalities has increased rapidly, and multimodal change detection (MCD) is gaining prominence. Given the scarcity of datasets and labels for MCD, unsupervised approaches are more practical for MCD. However, previous methods typically either merely reduce the gap between multimodal data through transformation or feed the original multimodal data directly into the discriminant network for difference extraction. The former faces challenges in extracting precise difference features. The latter contains the pronounced intrinsic distinction between the original multimodal data; direct extraction and comparison of features usually introduce significant noise, thereby compromising the quality of the resultant difference image. In this article, we proposed the MaCon framework to synergistically distill the common and discrepancy representations. The MaCon framework unifies mask reconstruction (MR) and contrastive learning (CL) self-supervised paradigms, where the MR serves the purpose of transformation while CL focuses on discrimination. Moreover, we presented an optimal sampling strategy in the CL architecture, enabling the CL subnetwork to extract more distinguishable discrepancy representations. Furthermore, we developed an effective silent attention mechanism that not only enhances contrast in output representations but stabilizes the training. Experimental results on both multimodal and monomodal datasets demonstrate that the MaCon framework effectively distills the intrinsic common representations between varied modalities and manifests state-of-the-art performance across both multimodal and monomodal CD. Such findings imply that the MaCon possesses the potential to serve as a unified framework in the CD and relevant fields. Source code will be publicly available once the article is accepted.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"1485-1500"},"PeriodicalIF":0.0,"publicationDate":"2025-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143486024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
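A minimal sketch (PyTorch, names my own) of the two self-supervised losses MaCon unifies: a masked-reconstruction term that learns common structure, and an InfoNCE term that sharpens discrepancy features by contrasting co-located patch pairs across modalities against the rest of the batch. The paper's optimal sampling and silent attention are not reproduced here.

```python
import torch
import torch.nn.functional as F

def mask_recon_loss(decoder_out, target, mask):
    # MR term: only masked patches are scored, as in MAE-style training
    return ((decoder_out - target) ** 2).mean(-1)[mask].mean()

def info_nce(za, zb, tau=0.07):
    # CL term: patch i of modality A should match patch i of modality B
    za, zb = F.normalize(za, dim=1), F.normalize(zb, dim=1)
    logits = za @ zb.T / tau
    return F.cross_entropy(logits, torch.arange(len(za)))

za, zb = torch.randn(64, 128), torch.randn(64, 128)  # paired patch features
dec, tgt = torch.randn(64, 196, 32), torch.randn(64, 196, 32)
mask = torch.rand(64, 196) < 0.75                    # 75% of patches masked
loss = mask_recon_loss(dec, tgt, mask) + info_nce(za, zb)
print(loss.item())
```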