Latest Articles: IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society)

Corrections to “Windowed Two-Dimensional Fourier Transform Concentration and Its Application to ISAR Imaging”
Karol Abratkiewicz
{"title":"Corrections to “Windowed Two-Dimensional Fourier Transform Concentration and Its Application to ISAR Imaging”","authors":"Karol Abratkiewicz","doi":"10.1109/TIP.2024.3517252","DOIUrl":"https://doi.org/10.1109/TIP.2024.3517252","url":null,"abstract":"Presents corrections to the paper, (Corrections to “Windowed Two-Dimensional Fourier Transform Concentration and Its Application to ISAR Imaging”).","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2241-2241"},"PeriodicalIF":0.0,"publicationDate":"2025-04-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10949651","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143777958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Object Adaptive Self-Supervised Dense Visual Pre-Training
Yu Zhang;Tao Zhang;Hongyuan Zhu;Zihan Chen;Siya Mi;Xi Peng;Xin Geng
{"title":"Object Adaptive Self-Supervised Dense Visual Pre-Training","authors":"Yu Zhang;Tao Zhang;Hongyuan Zhu;Zihan Chen;Siya Mi;Xi Peng;Xin Geng","doi":"10.1109/TIP.2025.3555073","DOIUrl":"10.1109/TIP.2025.3555073","url":null,"abstract":"Self-supervised visual pre-training models have achieved significant success without employing expensive annotations. Nevertheless, most of these models focus on iconic single-instance datasets (e.g. ImageNet), ignoring the insufficient discriminative representation for non-iconic multi-instance datasets (e.g. COCO). In this paper, we propose a novel Object Adaptive Dense Pre-training (OADP) method to learn the visual representation directly on the multi-instance datasets (e.g., PASCAL VOC and COCO) for dense prediction tasks (e.g., object detection and instance segmentation). We present a novel object-aware and learning-adaptive random view augmentation to focus the contrastive learning to enhance the discrimination of object presentations from large to small scale during different learning stages. Furthermore, the representations across different scale and resolutions are integrated so that the method can learn diverse representations. In the experiment, we evaluated OADP pre-trained on PASCAL VOC and COCO. Results show that our method has better performances than most existing state-of-the-art methods when transferring to various downstream tasks, including image classification, object detection, instance segmentation and semantic segmentation.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2228-2240"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143757764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
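The training signal OADP builds on is contrastive learning over object-aware views whose crop scale shifts as training proceeds. A minimal PyTorch sketch of that general recipe, assuming a simple linear crop-scale schedule and a standard InfoNCE loss (the schedule and all names here are illustrative, not the paper's implementation):

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

def view_augmentation(epoch, max_epochs, image_size=224):
    # Assumption: the minimum crop scale shrinks as training progresses,
    # so early views keep large objects and later views zoom in on small ones.
    min_scale = 0.6 - 0.4 * epoch / max_epochs      # 0.6 at start, 0.2 at end
    return transforms.Compose([
        transforms.RandomResizedCrop(image_size, scale=(min_scale, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])

def info_nce(z1, z2, temperature=0.2):
    """Standard InfoNCE: matching views are positives, all others negatives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature              # (B, B) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Usage: z1, z2 are encoder outputs for two augmented views of one batch.
loss = info_nce(torch.randn(32, 128), torch.randn(32, 128))
```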
Exploring Effective Factors for Improving Visual In-Context Learning
Yanpeng Sun;Qiang Chen;Jian Wang;Jingdong Wang;Zechao Li
{"title":"Exploring Effective Factors for Improving Visual In-Context Learning","authors":"Yanpeng Sun;Qiang Chen;Jian Wang;Jingdong Wang;Zechao Li","doi":"10.1109/TIP.2025.3554410","DOIUrl":"10.1109/TIP.2025.3554410","url":null,"abstract":"The In-Context Learning (ICL) is to understand a new task via a few demonstrations (aka. prompt) and predict new inputs without tuning the models. While it has been widely studied in NLP, it is still a relatively new area of research in computer vision. To reveal the factors influencing the performance of visual in-context learning, this paper shows that Prompt Selection and Prompt Fusion are two major factors that have a direct impact on the inference performance of visual in-context learning. Prompt selection is the process of selecting the most suitable prompt for query image. This is crucial because high-quality prompts assist large-scale visual models in rapidly and accurately comprehending new tasks. Prompt fusion involves combining prompts and query images to activate knowledge within large-scale visual models. However, altering the prompt fusion method significantly impacts its performance on new tasks. Based on these findings, we propose a simple framework prompt-SelF to improve visual in-context learning. Specifically, we first use the pixel-level retrieval method to select a suitable prompt, and then use different prompt fusion methods to activate diverse knowledge stored in the large-scale vision model, and finally, ensemble the prediction results obtained from different prompt fusion methods to obtain the final prediction results. We conducted extensive experiments on single-object segmentation and detection tasks to demonstrate the effectiveness of prompt-SelF. Remarkably, prompt-SelF has outperformed OSLSM method-based meta-learning in 1-shot segmentation for the first time. This indicated the great potential of visual in-context learning. The source code and models will be available at <uri>https://github.com/syp2ysy/prompt-SelF</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2147-2160"},"PeriodicalIF":0.0,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143744924","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
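Prompt selection, the first factor identified above, amounts to retrieving the demonstration most similar to the query image. A rough sketch of pixel-level retrieval by dense-feature similarity, assuming features from any frozen backbone (the scoring rule is an assumption, not prompt-SelF's exact one):

```python
import torch
import torch.nn.functional as F

def select_prompt(query_feat, prompt_feats):
    """Pick the demonstration whose dense features best match the query.

    query_feat:   (C, H, W) dense features of the query image
    prompt_feats: (N, C, H, W) dense features of N candidate prompts
    Assumption: per-pixel cosine similarity averaged over positions stands
    in for the paper's pixel-level retrieval score.
    """
    q = F.normalize(query_feat.flatten(1), dim=0)      # (C, HW), unit pixels
    scores = []
    for p in prompt_feats:
        p = F.normalize(p.flatten(1), dim=0)           # (C, HW)
        scores.append((q * p).sum(0).mean())           # mean pixel similarity
    return int(torch.stack(scores).argmax())

best = select_prompt(torch.randn(256, 14, 14), torch.randn(10, 256, 14, 14))
```

Prompt fusion and the final ensemble would then average the predictions obtained from different spatial arrangements of the selected prompt and the query.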
Local Cross-Patch Activation From Multi-Direction for Weakly Supervised Object Localization
Pei Lv;Junying Ren;Genwang Han;Jiwen Lu;Mingliang Xu
{"title":"Local Cross-Patch Activation From Multi-Direction for Weakly Supervised Object Localization","authors":"Pei Lv;Junying Ren;Genwang Han;Jiwen Lu;Mingliang Xu","doi":"10.1109/TIP.2025.3554398","DOIUrl":"10.1109/TIP.2025.3554398","url":null,"abstract":"Weakly supervised object localization (WSOL) learns to localize objects using only image-level labels. Recently, some studies apply transformers in WSOL to capture the long-range feature dependency and alleviate the partial activation issue of CNN-based methods. However, existing transformer-based methods still face two challenges. The first challenge is the over-activation of backgrounds. Specifically, the object boundaries and background are often semantically similar, and localization models may misidentify the background as a part of objects. The second challenge is the incomplete activation of occluded objects, since transformer architecture makes it difficult to capture local features across patches due to ignoring semantic and spatial coherence. To address these issues, in this paper, we propose LCA-MD, a novel transformer-based WSOL method using local cross-patch activation from multi-direction, which can capture more details of local features while inhibiting the background over-activation. In LCA-MD, first, combining contrastive learning with the transformer, we propose a token feature contrast module (TCM) that can maximize the difference between foregrounds and backgrounds and further separate them more accurately. Second, we propose a semantic-spatial fusion module (SFM), which leverages multi-directional perception to capture the local cross-patch features and diffuse activation across occlusions. Experiment results on the CUB-200-2011 and ILSVRC datasets demonstrate that our LCA-MD is significantly superior and has achieved state-of-the-art results in WSOL. The project code is available at <uri>https://github.com/rjy-fighting/LCA-MD</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2213-2227"},"PeriodicalIF":0.0,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143744926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
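The token feature contrast module described above pairs transformer tokens with contrastive learning to pull foregrounds and backgrounds apart. A bare-bones stand-in for that idea, assuming a pseudo foreground mask over patch tokens (the exact loss in LCA-MD is not reproduced here):

```python
import torch
import torch.nn.functional as F

def token_contrast_loss(tokens, fg_mask, temperature=0.1):
    """Encourage foreground tokens to cluster away from background tokens.

    tokens:  (N, C) patch-token features from a ViT
    fg_mask: (N,) boolean, True where a token is (pseudo-)foreground
    Treats within-foreground similarities as positives and
    foreground-background similarities as negatives.
    """
    t = F.normalize(tokens, dim=1)
    fg, bg = t[fg_mask], t[~fg_mask]
    pos = (fg @ fg.t() / temperature).exp().sum()   # fg-fg agreement
    neg = (fg @ bg.t() / temperature).exp().sum()   # fg-bg confusion
    return -torch.log(pos / (pos + neg))

loss = token_contrast_loss(torch.randn(196, 768), torch.rand(196) > 0.5)
```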
IDENet: An Inter-Domain Equilibrium Network for Unsupervised Cross-Domain Person Re-Identification
Xi Yang;Wenjiao Dong;Gu Zheng;Nannan Wang;Xinbo Gao
{"title":"IDENet: An Inter-Domain Equilibrium Network for Unsupervised Cross-Domain Person Re-Identification","authors":"Xi Yang;Wenjiao Dong;Gu Zheng;Nannan Wang;Xinbo Gao","doi":"10.1109/TIP.2025.3554408","DOIUrl":"10.1109/TIP.2025.3554408","url":null,"abstract":"Unsupervised person re-identification aims to retrieve a given pedestrian image from unlabeled data. For training on the unlabeled data, the method of clustering and assigning pseudo-labels has become mainstream, but the pseudo-labels themselves are noisy and will reduce the accuracy. To overcome this problem, several pseudo-label improvement methods have been proposed. But on the one hand, they only use target domain data for fine-tuning and do not make sufficient use of high-quality labeled data in the source domain. On the other hand, they ignore the critical fine-grained features of pedestrians and overfitting problems in the later training period. In this paper, we propose a novel unsupervised cross-domain person re-identification network (IDENet) based on an inter-domain equilibrium structure to improve the quality of pseudo-labels. Specifically, we make full use of both source domain and target domain information and construct a small learning network to equalize label allocation between the two domains. Based on it, we also develop a dynamic neural network with adaptive convolution kernels to generate adaptive residuals for adapting domain-agnostic deep fine-grained features. In addition, we design the network structure based on ordinary differential equations and embed modules to solve the problem of network overfitting. Extensive cross-domain experimental results on Market1501, PersonX, and MSMT17 prove that our proposed method outperforms the state-of-the-art methods.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2133-2146"},"PeriodicalIF":0.0,"publicationDate":"2025-03-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143744939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
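For context, the clustering-and-pseudo-label step this line of work builds on is typically implemented with DBSCAN over normalized embeddings. A generic sketch of that baseline step (hyperparameters are common values from the unsupervised re-ID literature, not IDENet's):

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import normalize

def assign_pseudo_labels(features, eps=0.6, min_samples=4):
    """Cluster re-ID embeddings and use cluster ids as pseudo identities.

    features: (num_images, dim) embeddings from the current encoder.
    Normalizing first makes Euclidean distances behave like cosine ones.
    """
    feats = normalize(features)
    labels = DBSCAN(eps=eps, min_samples=min_samples,
                    metric="euclidean").fit_predict(feats)
    # Label -1 marks noise samples, usually discarded for that epoch.
    return labels

labels = assign_pseudo_labels(np.random.rand(500, 2048))
```

Because these labels are re-estimated every epoch and are noisy, methods like IDENet add structure (here, inter-domain equilibrium) on top of this step rather than trusting the raw clusters.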
Segment Anything Model Is a Good Teacher for Local Feature Learning
Jingqian Wu;Rongtao Xu;Zach Wood-Doughty;Changwei Wang;Shibiao Xu;Edmund Y. Lam
{"title":"Segment Anything Model Is a Good Teacher for Local Feature Learning","authors":"Jingqian Wu;Rongtao Xu;Zach Wood-Doughty;Changwei Wang;Shibiao Xu;Edmund Y. Lam","doi":"10.1109/TIP.2025.3554033","DOIUrl":"10.1109/TIP.2025.3554033","url":null,"abstract":"Local feature detection and description play an important role in many computer vision tasks, which are designed to detect and describe keypoints in any scene and any downstream task. Data-driven local feature learning methods need to rely on pixel-level correspondence for training. However, a vast number of existing approaches ignored the semantic information on which humans rely to describe image pixels. In addition, it is not feasible to enhance generic scene keypoints detection and description simply by using traditional common semantic segmentation models because they can only recognize a limited number of coarse-grained object classes. In this paper, we propose SAMFeat to introduce SAM (segment anything model), a foundation model trained on 11 million images, as a teacher to guide local feature learning. SAMFeat learns additional semantic information brought by SAM and thus is inspired by higher performance even with limited training samples. To do so, first, we construct an auxiliary task of Attention-weighted Semantic Relation Distillation (ASRD), which adaptively distillates feature relations with category-agnostic semantic information learned by the SAM encoder into a local feature learning network, to improve local feature description using semantic discrimination. Second, we develop a technique called Weakly Supervised Contrastive Learning Based on Semantic Grouping (WSC), which utilizes semantic groupings derived from SAM as weakly supervised signals, to optimize the metric space of local descriptors. Third, we design an Edge Attention Guidance (EAG) to further improve the accuracy of local feature detection and description by prompting the network to pay more attention to the edge region guided by SAM. SAMFeat’s performance on various tasks, such as image matching on HPatches, and long-term visual localization on Aachen Day-Night showcases its superiority over previous local features. The release code is available at <uri>https://github.com/vignywang/SAMFeat</uri>.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2097-2111"},"PeriodicalIF":0.0,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143733961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
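ASRD distills relations between features rather than the features themselves, which sidesteps the channel-width mismatch between the SAM encoder and a small descriptor network. A plain, unweighted sketch of relation distillation (the attention weighting of the actual ASRD module is omitted):

```python
import torch
import torch.nn.functional as F

def relation_distillation(student_feats, teacher_feats):
    """Match pairwise feature relations instead of raw features.

    student_feats: (N, C_s), teacher_feats: (N, C_t). Channel widths may
    differ, which is why the (N, N) similarity matrices are compared
    rather than the features themselves.
    """
    s = F.normalize(student_feats, dim=1)
    t = F.normalize(teacher_feats, dim=1)
    return F.mse_loss(s @ s.t(), t @ t.t())   # align relation structures

loss = relation_distillation(torch.randn(64, 128), torch.randn(64, 256))
```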
Frequency-Spatial Complementation: Unified Channel-Specific Style Attack for Cross-Domain Few-Shot Learning
Zhong Ji;Zhilong Wang;Xiyao Liu;Yunlong Yu;Yanwei Pang;Jungong Han
{"title":"Frequency-Spatial Complementation: Unified Channel-Specific Style Attack for Cross-Domain Few-Shot Learning","authors":"Zhong Ji;Zhilong Wang;Xiyao Liu;Yunlong Yu;Yanwei Pang;Jungong Han","doi":"10.1109/TIP.2025.3553781","DOIUrl":"10.1109/TIP.2025.3553781","url":null,"abstract":"Cross-Domain Few-Shot Learning (CD-FSL) addresses the challenges of recognizing targets with out-of-domain data when only a few instances are available. Many current CD-FSL approaches primarily focus on enhancing the generalization capabilities of models in spatial domain, which neglects the role of the frequency domain in domain generalization. To take advantage of frequency domain in processing global information, we propose a Frequency-Spatial Complementation (FSC) model, which combines frequency domain information with spatial domain information to learn domain-invariant information from attacked data style. Specifically, we design a Frequency and Spatial Fusion (FusionFS) module to enhance the ability of the model to capture style-related information. Besides, we propose two attack strategies, i.e., the Gradient-guided Unified Style Attack (GUSA) strategy and the Channel-specific Attack Intensity Calculation (CAIC) strategy, which conduct targeted attacks on different channels to provide more diversified style data during the training phase, especially in single-source domain scenarios where the source domain data style is homogeneous. Extensive experiments across eight target domains demonstrate that our method significantly improves the model’s performance under various styles.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2242-2253"},"PeriodicalIF":0.0,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143733994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
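A common way to realize frequency-domain style perturbation, and a plausible building block for the attacks above, is to mix amplitude spectra while keeping the content phase. A minimal sketch (the blend factor alpha is an illustrative knob, not GUSA/CAIC's gradient-guided, channel-specific intensities):

```python
import torch

def fourier_style_mix(content, style, alpha=0.5):
    """Blend the amplitude spectrum (style) of one image into another.

    content, style: (C, H, W) tensors. The amplitude spectrum carries much
    of an image's "style", while phase preserves structure, which is the
    premise behind frequency-domain style perturbation.
    """
    fc = torch.fft.fft2(content)
    fs = torch.fft.fft2(style)
    amp = (1 - alpha) * fc.abs() + alpha * fs.abs()   # mixed amplitude
    mixed = amp * torch.exp(1j * fc.angle())          # keep content phase
    return torch.fft.ifft2(mixed).real

out = fourier_style_mix(torch.rand(3, 64, 64), torch.rand(3, 64, 64))
```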
Adaptive Dual-Axis Style-Based Recalibration Network With Class-Wise Statistics Loss for Imbalanced Medical Image Classification
Xiaoqing Zhang;Zunjie Xiao;Jingzhe Ma;Xiao Wu;Jilu Zhao;Shuai Zhang;Runzhi Li;Yi Pan;Jiang Liu
{"title":"Adaptive Dual-Axis Style-Based Recalibration Network With Class-Wise Statistics Loss for Imbalanced Medical Image Classification","authors":"Xiaoqing Zhang;Zunjie Xiao;Jingzhe Ma;Xiao Wu;Jilu Zhao;Shuai Zhang;Runzhi Li;Yi Pan;Jiang Liu","doi":"10.1109/TIP.2025.3551128","DOIUrl":"10.1109/TIP.2025.3551128","url":null,"abstract":"Salient and small lesions (e.g., microaneurysms on fundus) both play significant roles in real-world disease diagnosis under medical image examinations. Although deep neural networks (DNNs) have achieved promising medical image classification performance, they often have limitations in capturing both salient and small lesion information, restricting performance improvement in imbalanced medical image classification. Recently, with the advent of DNN-based style transfer in medical image generation, the roles of clinical styles have attracted great interest, as they are crucial indicators of lesions. Motivated by this observation, we propose a novel Adaptive Dual-Axis Style-based Recalibration (ADSR) module, leveraging the potential of clinical styles to guide DNNs in effectively learning salient and small lesion information from a dual-axis perspective. ADSR first emphasizes salient lesion information via global style-based adaptation, then captures small lesion information with pixel-wise style-based fusion. We construct an ADSR-Net for imbalanced medical image classification by stacking multiple ADSR modules. Additionally, DNNs typically adopt cross-entropy loss for parameter optimization, which ignores the impacts of class-wise predicted probability distributions. To address this, we introduce a new Class-wise Statistics Loss (CWS) combined with CE to further boost imbalanced medical image classification results. Extensive experiments on five imbalanced medical image datasets demonstrate not only the superiority of ADSR-Net and CWS over state-of-the-art (SOTA) methods but also their improved confidence calibration results. For example, ADSR-Net with the proposed loss significantly outperforms CABNet50 by 21.39% and 27.82% in F1 and B-ACC while reducing 3.31% and 4.57% in ECE and BS on ISIC2018.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2081-2096"},"PeriodicalIF":0.0,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143733996","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
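The abstract does not spell out the CWS loss, so the following is an assumption-heavy sketch of one plausible reading: keep CE, and add a penalty on batch-level class-wise predicted-probability statistics so predictions do not collapse onto majority classes.

```python
import torch
import torch.nn.functional as F

def class_wise_statistics_loss(logits, targets, weight=0.1):
    """Cross-entropy plus a penalty on per-class predicted-probability means.

    Hypothetical stand-in for CWS: the batch-mean predicted distribution is
    pulled toward uniform, discouraging collapse onto majority classes.
    The weight and the uniform target are assumptions, not the paper's.
    """
    ce = F.cross_entropy(logits, targets)
    probs = F.softmax(logits, dim=1)
    mean_probs = probs.mean(dim=0)                  # batch-level class stats
    uniform = torch.full_like(mean_probs, 1.0 / logits.size(1))
    stats_penalty = F.kl_div(mean_probs.log(), uniform, reduction="batchmean")
    return ce + weight * stats_penalty

loss = class_wise_statistics_loss(torch.randn(16, 5), torch.randint(0, 5, (16,)))
```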
Perception Assisted Transformer for Unsupervised Object Re-Identification
Shuoyi Chen;Mang Ye;Xingping Dong;Bo Du
{"title":"Perception Assisted Transformer for Unsupervised Object Re-Identification","authors":"Shuoyi Chen;Mang Ye;Xingping Dong;Bo Du","doi":"10.1109/TIP.2025.3553777","DOIUrl":"10.1109/TIP.2025.3553777","url":null,"abstract":"Unsupervised object re-identification (Re-ID) aims to learn discriminative features without identity annotations. Existing mainstream methods are usually developed based on convolutional neural networks for feature extraction and pseudo-label estimation. However, convolutional neural networks suffer from limitations in capturing dispersed long-range dependencies and integrating global information. In comparison, vision transformers demonstrate superior robustness in complex environments, leveraging their versatile modeling capabilities to process diverse data structures with greater precision. In this paper, we delve into the potential of vision transformers in unsupervised Re-ID, proposing a Transformer-based perception-assisted framework (PAT). Considering Re-ID is a typical fine-grained task, existing unsupervised Re-ID methods relying on pseudo-labels generated by clustering algorithms provide only category-level discriminative supervision, with limited attention to local details. Therefore, we propose a novel target-aware mask alignment (TMA) strategy that provides additional supervision signals by leveraging low-level visual cues. Specifically, we employ pseudo-labels to guide the fine-grained alignment of features with local pixel information from critical discriminative regions. This method establishes a mutual learning mechanism via a shared Transformer, effectively balancing discriminative learning and detailed understanding. Furthermore, we propose a perceptual fusion feature augmentation (PFA) method to optimize instance-level discriminative learning. The proposed method is evaluated on multiple Re-ID datasets, demonstrating superior performance and robustness in comparison to state-of-the-art techniques. Notably, without annotations, our method achieves better results than many supervised counterparts. The code will be released.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2112-2123"},"PeriodicalIF":0.0,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143723295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
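A rough sketch of what a mask-guided alignment objective could look like: pool patch tokens under a mask of critical discriminative regions and align the result with the image-level representation. Both the pooling and the loss are assumptions for illustration, not PAT's definition of TMA.

```python
import torch
import torch.nn.functional as F

def mask_alignment_loss(patch_tokens, mask):
    """Align the image-level feature with features pooled from masked regions.

    patch_tokens: (N, C) ViT patch tokens
    mask:         (N,) weights in [0, 1] highlighting discriminative regions
    """
    global_feat = patch_tokens.mean(dim=0)                   # stand-in for CLS
    w = mask / (mask.sum() + 1e-6)                           # normalized weights
    local_feat = (patch_tokens * w.unsqueeze(1)).sum(dim=0)  # masked pooling
    return 1 - F.cosine_similarity(global_feat, local_feat, dim=0)

loss = mask_alignment_loss(torch.randn(196, 768), torch.rand(196))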
Mutually Reinforcing Learning of Decoupled Degradation and Diffusion Enhancement for Unpaired Low-Light Image Lightening
Kangle Wu;Jun Huang;Yong Ma;Fan Fan;Jiayi Ma
{"title":"Mutually Reinforcing Learning of Decoupled Degradation and Diffusion Enhancement for Unpaired Low-Light Image Lightening","authors":"Kangle Wu;Jun Huang;Yong Ma;Fan Fan;Jiayi Ma","doi":"10.1109/TIP.2025.3553070","DOIUrl":"10.1109/TIP.2025.3553070","url":null,"abstract":"Denoising Diffusion Probabilistic Model (DDPM) has demonstrated exceptional performance in low-light enhancement task. However, the dependency on paired training datas has left the generality of DDPM in low-light enhancement largely untapped. Therefore, this paper proposes a mutually reinforcing learning framework of decoupled degradation and diffusion enhancement, named MRLIE, which leverages style guidance from unpaired low-light images to generate pseudo-image pairs that are consistent with the target domain, thereby optimizing the latter diffusion enhancement network in a supervised manner. During the degradation process, the diffusion loss of fixed enhancement network serves as a evaluation metric for structure consistency and is combined with adversarial style loss to form the optimization objective for degradation network. Such loss design ensures that scene structure information is retained during the degradation process. During the enhancement process, the degradation network with frozen parameters continuously generates pseudo-paired low-/normal-light image pairs as training datas, thus the diffusion enhancement network could be progressively optimized. On the whole, the two processes are interdependent and could achieve cooperative improvement in terms of degradation realism and enhancement quality through iterative optimization. Additionally, we propose the Retinex-based decoupled degradation strategy for simulating the complex degradation in real low-light imaging, which ensures the color correction and noise suppression capabilities of latter diffusion enhancement network. Extensive experiments show that MRLIE can achieve promising results and better generality across various datasets.","PeriodicalId":94032,"journal":{"name":"IEEE transactions on image processing : a publication of the IEEE Signal Processing Society","volume":"34 ","pages":"2020-2035"},"PeriodicalIF":0.0,"publicationDate":"2025-03-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143723297","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
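The Retinex model behind the decoupled degradation strategy factors an image as reflectance times illumination, I = R * L, so low light can be simulated by darkening L and adding noise. A fixed-recipe sketch of that simulation (MRLIE learns its degradation network; this only illustrates the underlying decomposition):

```python
import torch

def retinex_degrade(normal_light, gamma=2.5, noise_sigma=0.03):
    """Synthesize a low-light image via a Retinex-style decomposition.

    normal_light: (C, H, W) in [0, 1]. Illumination L is approximated by the
    per-pixel max over channels, darkened with a gamma curve, then recombined
    with the reflectance estimate; Gaussian noise mimics sensor noise.
    The gamma and noise level are illustrative values.
    """
    illum = normal_light.max(dim=0, keepdim=True).values.clamp(min=1e-3)
    reflect = normal_light / illum            # reflectance estimate R
    low_illum = illum.pow(gamma)              # darkened illumination L'
    low = reflect * low_illum                 # recombine: I' = R * L'
    return (low + noise_sigma * torch.randn_like(low)).clamp(0.0, 1.0)

low_img = retinex_degrade(torch.rand(3, 256, 256))
```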