IEEE Transactions on Image Processing (a publication of the IEEE Signal Processing Society): Latest Articles

Eigenpose: Occlusion-Robust 3D Human Mesh Reconstruction
Mi-Gyeong Gwon; Gi-Mun Um; Won-Sik Cheong; Wonjun Kim
DOI: 10.1109/TIP.2025.3559788 | Published: 2025-04-16 | Vol. 34, pp. 2379-2391
Abstract: A new approach for occlusion-robust 3D human mesh reconstruction from a single image is introduced in this paper. Since occlusion has emerged as a major problem in this field, there have been meaningful efforts to deal with various types of occlusion (e.g., person-to-person occlusion, person-to-object occlusion, self-occlusion, etc.). Although many recent studies have shown remarkable progress, previous regression-based methods still have limitations in handling occlusion due to the lack of appearance information. To address this problem, we propose a novel method for human mesh reconstruction based on pose-relevant subspace analysis. Specifically, we first generate a set of eigenvectors, so-called eigenposes, by conducting singular value decomposition (SVD) of the pose matrix, which contains diverse poses sampled from the training set. These eigenposes are then linearly combined to construct a target body pose according to fusing coefficients, which are learned by the proposed network. Such a global combination of principal body postures (i.e., eigenposes) greatly helps to cope with partial ambiguities caused by occlusions. Furthermore, we propose a joint injection module that efficiently incorporates the spatial information of visible joints into the encoded feature during the estimation of fusing coefficients. Experimental results on benchmark datasets demonstrate the ability of the proposed method to robustly reconstruct the human mesh under various occlusions occurring in real-world scenarios. The code and model are publicly available at: https://github.com/DCVL-3D/Eigenpose_release
Citations: 0
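
The eigenpose construction described in the abstract lends itself to a small numerical sketch: take the SVD of a pose matrix and rebuild a target pose as a weighted combination of the leading right singular vectors. The pose dimensionality, the number of retained eigenposes, and the use of least squares in place of the paper's learned fusing-coefficient network are illustrative assumptions, not the authors' exact setup.

import numpy as np

rng = np.random.default_rng(0)

# Toy "training" pose matrix: 500 poses, each a 72-D SMPL-style vector (assumption).
P = rng.standard_normal((500, 72))

# SVD of the pose matrix; the rows of Vt are the eigenposes.
U, S, Vt = np.linalg.svd(P, full_matrices=False)
k = 10                       # number of eigenposes kept (assumption)
eigenposes = Vt[:k]          # (k, 72)

# In the paper a network predicts the fusing coefficients from the image; here we
# simply fit them to a held-out pose by least squares for illustration.
target = rng.standard_normal(72)
coeffs, *_ = np.linalg.lstsq(eigenposes.T, target, rcond=None)

# Reconstruct the pose as a linear combination of eigenposes.
recon = coeffs @ eigenposes
print("reconstruction error:", np.linalg.norm(target - recon))
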
MuseumMaker: Continual Style Customization Without Catastrophic Forgetting
Chenxi Liu; Gan Sun; Wenqi Liang; Jiahua Dong; Can Qin; Yang Cong
DOI: 10.1109/TIP.2025.3553024 | Published: 2025-04-16 | Vol. 34, pp. 2499-2512
Abstract: Pre-trained large text-to-image (T2I) models with an appropriate text prompt have attracted growing interest in customized image generation. However, the catastrophic forgetting issue makes it hard to continually synthesize new user-provided styles while retaining satisfying results for previously learned styles. In this paper, we propose MuseumMaker, a method that enables the synthesis of images following a set of customized styles in a never-ending manner, gradually accumulating these creative artistic works as a museum. When facing a new customization style, we develop a style distillation loss module to extract and learn the style of the training data for the new image generation task. It minimizes the learning biases caused by the content of new training images and addresses the catastrophic overfitting issue induced by few-shot images. To deal with catastrophic forgetting among past learned styles, we devise a dual regularization for the shared-LoRA module to optimize the direction of the model update, which regularizes the diffusion model from both the weight and feature aspects. Meanwhile, to further preserve historical knowledge from past styles and address the limited representability of LoRA, we design a task-wise token learning module in which a unique token embedding is learned to denote each new style. As new user-provided styles arrive, our MuseumMaker can capture the nuances of the new styles while maintaining the details of learned styles. Experimental results on diverse style datasets validate the effectiveness of the proposed MuseumMaker method, showcasing its robustness and versatility across various scenarios.
Citations: 0
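
The dual regularization for the shared LoRA described above can be sketched as two penalties on a low-rank adapter: one in weight space (keep the adapter close to its state after the previous style) and one in feature space (keep its outputs on new data close to the previous model's outputs). The layer shape, loss weights, and exact penalty forms below are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a low-rank adapter (illustrative)."""
    def __init__(self, dim, rank=4):
        super().__init__()
        self.base = nn.Linear(dim, dim, bias=False)
        self.base.weight.requires_grad_(False)
        self.A = nn.Parameter(torch.zeros(rank, dim))
        self.B = nn.Parameter(torch.randn(dim, rank) * 0.01)

    def forward(self, x):
        return self.base(x) + x @ self.A.t() @ self.B.t()

def dual_regularization(layer, prev_A, prev_B, x, lam_w=1.0, lam_f=1.0):
    """Penalize drift of the shared LoRA in weight space and in feature space (assumed forms)."""
    with torch.no_grad():
        prev_feat = layer.base(x) + x @ prev_A.t() @ prev_B.t()
    weight_term = (layer.A - prev_A).pow(2).sum() + (layer.B - prev_B).pow(2).sum()
    feature_term = (layer(x) - prev_feat).pow(2).mean()
    return lam_w * weight_term + lam_f * feature_term

layer = LoRALinear(dim=64)
prev_A, prev_B = layer.A.detach().clone(), layer.B.detach().clone()
x = torch.randn(8, 64)                 # features from the new style's images (stand-in)
loss = dual_regularization(layer, prev_A, prev_B, x)
loss.backward()
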
Accurate and Robust Three-Intersection-Chord-Invariant Ellipse Detection
Guan Xu; Yunkun Wang; Fang Chen; Hui Shen; Xiaotao Li
DOI: 10.1109/TIP.2025.3559409 | Published: 2025-04-15 | Vol. 34, pp. 2392-2407
Abstract: Ellipse detection is of great significance in the fields of image processing and computer vision. Accurate, stable, and direct ellipse detection in real-world images has always been a key issue. Therefore, an ellipse detection method is proposed on the basis of a constructed three-intersection-chord invariant. First, in the inflexion point detection, a PCA minimum bounding box that considers the distribution characteristics of edge points is studied to achieve more refined line segment screening. Second, a multi-scale inflexion point detection method is proposed to effectively avoid over-segmentation of small arc segments, providing assurance for more reasonable and reliable arc segment combinations. Then, the 20 precisely classified arc segment combinations are refined into 4 combinations. A number of non-homologous arc segment combinations can be quickly removed to reduce incorrect combinations by the constructed midpoint distance constraint and quadrant constraint. Moreover, in order to accurately reflect the strict arc segment combination constraints of the geometric features of ellipses, a three-intersection-chord-invariant model of ellipses is established with a strong constraint on the relative distances among five constraint points, by which a more robust initial ellipse set of homologous arc segment combinations is obtained. Finally, ellipse validation and clustering are performed on the initial set of ellipses to obtain high-precision ellipses. The accuracy of the ellipse detection method is experimentally validated on 6 publicly available datasets and 2 established wheel rim datasets.
Citations: 0
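
One concrete step named in the abstract, the PCA minimum bounding box of edge points used for line-segment screening, can be sketched as follows. The synthetic arc, noise level, and how the resulting box would feed the screening rule are assumptions for illustration.

import numpy as np

def pca_bounding_box(points):
    """Oriented bounding box of 2-D edge points via PCA (one step of the pipeline, sketched)."""
    pts = np.asarray(points, dtype=float)
    mean = pts.mean(axis=0)
    centered = pts - mean
    # Principal axes of the edge-point distribution.
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ Vt.T                 # coordinates in the principal-axis frame
    mins, maxs = proj.min(axis=0), proj.max(axis=0)
    # Four corners of the box, mapped back to image coordinates.
    corners_pc = np.array([[mins[0], mins[1]], [mins[0], maxs[1]],
                           [maxs[0], maxs[1]], [maxs[0], mins[1]]])
    return corners_pc @ Vt + mean

rng = np.random.default_rng(1)
t = rng.uniform(0, np.pi / 2, 200)         # a noisy quarter arc, like a detected edge segment
arc = np.c_[50 + 30 * np.cos(t), 40 + 20 * np.sin(t)] + rng.normal(0, 0.3, (200, 2))
print(pca_bounding_box(arc))
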
Session-Guided Attention in Continuous Learning With Few Samples
Zicheng Pan; Xiaohan Yu; Yongsheng Gao
DOI: 10.1109/TIP.2025.3559463 | Published: 2025-04-15 | Vol. 34, pp. 2654-2666
Abstract: Few-shot class-incremental learning (FSCIL) aims to learn from a sequence of incremental data sessions with a limited number of samples in each class. The main issues it encounters are the risk of forgetting previously learned data when introducing new data classes, as well as not being able to adapt the old model to new data due to limited training samples. Existing state-of-the-art solutions normally utilize pre-trained models with fixed backbone parameters to avoid forgetting old knowledge. While this strategy preserves previously learned features, the fixed nature of the backbone limits the model's ability to learn optimal representations for unseen classes, which compromises performance on new class increments. In this paper, we propose a novel SEssion-Guided Attention framework (SEGA) to tackle this challenge. SEGA exploits the class relationships within each incremental session by assessing how test samples relate to class prototypes. This allows accurate incremental session identification for test data, leading to more precise classification. In addition, an attention module is introduced for each incremental session to further utilize the features from the fixed backbone. Once the session of the test image is determined, we fine-tune the feature with the corresponding attention module to better cluster the sample within the selected session. Our approach adopts the fixed-backbone strategy to avoid forgetting old knowledge while achieving novel data adaptation. Experimental results on three FSCIL datasets consistently demonstrate the superior adaptability of the proposed SEGA framework in FSCIL tasks. The code is available at: https://github.com/zichengpan/SEGA
Citations: 0
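
The session identification step can be illustrated with a cosine-similarity sketch: score a test feature against the class prototypes of every incremental session and pick the best-matching session before applying that session's attention module. The prototype shapes and the max-similarity scoring rule are assumptions; the paper's exact scoring is not reproduced here.

import numpy as np

def identify_session(feature, prototypes_by_session):
    """Pick the incremental session whose class prototypes best match the test feature.

    prototypes_by_session: list of (num_classes_in_session, dim) arrays.
    """
    f = feature / np.linalg.norm(feature)
    scores = []
    for protos in prototypes_by_session:
        p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
        scores.append((p @ f).max())       # best-matching class within this session
    return int(np.argmax(scores)), scores

rng = np.random.default_rng(2)
sessions = [rng.standard_normal((5, 128)) for _ in range(3)]   # 3 sessions, 5 classes each (assumption)
test_feat = sessions[1][2] + 0.1 * rng.standard_normal(128)    # close to a prototype of session 1
print(identify_session(test_feat, sessions)[0])                # prints 1
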
Keep and Extent: Unified Knowledge Embedding for Few-Shot Image Generation
Chenghao Xu; Jiexi Yan; Cheng Deng
DOI: 10.1109/TIP.2025.3557578 | Published: 2025-04-15 | Vol. 34, pp. 2315-2324
Abstract: Training Generative Adversarial Networks (GANs) with few-shot data is a challenging task, commonly solved by adapting a deep generative model pre-trained on large-scale data in a source domain to small target domains with limited training data. In practice, most existing methods focus on designing task-specific fine-tuning strategies or regularization terms to select and preserve compatible knowledge across the source and target domains. However, the compatible knowledge greatly depends on the target domain and is entangled with the incompatible one. For the few-shot image generation task, without accurate compatible knowledge as a prior, the generated images strongly overfit the scarce target images. From a different perspective, we propose a unified learning paradigm for better knowledge transfer, i.e., keep and extent (KAE). Specifically, we orthogonally decompose the latent space of GANs, where the resting directions that have an unnoticeable impact on the generated images are adopted to extend the new target latent subspace, while the remaining directions are kept intact to reconstruct the source latent subspace. In this way, the whole source-domain knowledge is contained in the source latent subspace, and the compatible knowledge is automatically transferred to the target domain along the resting directions, rather than being selected manually. Extensive experimental results on several benchmark datasets demonstrate the superiority of our method.
Citations: 0
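
The "keep and extent" decomposition can be mimicked on a toy generator: build an orthogonal basis of the latent space, measure how strongly each direction changes the output, and reserve the least-sensitive ("resting") directions for the target domain while keeping the rest intact. The toy linear generator, the SVD basis, and the finite-difference sensitivity test below are illustrative assumptions rather than the paper's procedure.

import numpy as np

rng = np.random.default_rng(3)
d = 16
# Toy frozen "generator": a map whose output barely responds to some latent directions,
# standing in for a pre-trained GAN generator (assumption).
W = rng.standard_normal((64, d))
W[:, 10:] *= 1e-3                      # the last directions hardly affect the output
G = lambda z: np.tanh(W @ z)

# Orthogonal basis of the latent space (here simply the SVD basis of W).
_, _, Vt = np.linalg.svd(W)
z0 = rng.standard_normal(d)
# Sensitivity of the generated output to each basis direction (finite differences).
eps = 1e-2
sens = np.array([np.linalg.norm(G(z0 + eps * v) - G(z0)) for v in Vt])

order = np.argsort(sens)
resting = Vt[order[:4]]     # low-impact directions: reserved to extend the target subspace
kept = Vt[order[4:]]        # source-knowledge directions: kept intact
print("resting-direction sensitivities:", sens[order[:4]])
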
A Contrastive-Learning Framework for Unsupervised Salient Object Detection
Huankang Guan; Jiaying Lin; Rynson W. H. Lau
DOI: 10.1109/TIP.2025.3558674 | Published: 2025-04-14 | Vol. 34, pp. 2487-2498
Abstract: Existing unsupervised salient object detection (USOD) methods usually rely on low-level saliency priors, such as center and background priors, to detect salient objects, resulting in insufficient high-level semantic understanding. These low-level priors can be fragile and lead to failure when natural images do not satisfy the prior assumptions; e.g., such methods may fail to detect off-center salient objects, producing fragmented objects in the segmentation. To address these problems, we propose to eliminate the dependency on flimsy low-level priors and to extract high-level saliency from natural images through a contrastive learning framework. To this end, we propose a Contrastive Saliency Network (CSNet), a prior-free and label-free saliency detector, with two novel modules: 1) a Contrastive Saliency Extraction (CSE) module to extract high-level saliency cues by mimicking the human attention mechanism within an instance-discrimination task through a contrastive learning framework, and 2) a Feature Re-Coordinate (FRC) module to recover spatial details by calibrating high-level features with low-level features in an unsupervised fashion. In addition, we introduce a novel local appearance triplet (LAT) loss to assist the training process by encouraging similar saliency scores for regions with homogeneous appearance. Extensive experiments show that our approach is effective and outperforms state-of-the-art methods on popular SOD benchmarks.
Citations: 0
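
A minimal sketch of what a local appearance triplet (LAT) objective could look like is given below: regions whose appearance is homogeneous with the anchor should receive closer saliency scores than regions with a different appearance. The margin value, the region sampling, and the appearance-similarity criterion are assumptions, not the paper's definition.

import torch
import torch.nn.functional as F

def local_appearance_triplet_loss(sal_anchor, sal_pos, sal_neg, margin=0.5):
    """Triplet objective on per-region saliency scores (assumed form):
    anchor/positive pairs share a homogeneous appearance, negatives do not."""
    d_pos = (sal_anchor - sal_pos).abs()
    d_neg = (sal_anchor - sal_neg).abs()
    return F.relu(d_pos - d_neg + margin).mean()

# Mean saliency scores of sampled regions for one image (illustrative values).
sal_anchor = torch.tensor([0.9, 0.2, 0.7])
sal_pos    = torch.tensor([0.8, 0.3, 0.6])   # appearance similar to the anchors
sal_neg    = torch.tensor([0.1, 0.9, 0.2])   # appearance different from the anchors
print(local_appearance_triplet_loss(sal_anchor, sal_pos, sal_neg))
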
CharacterFactory: Sampling Consistent Characters With GANs for Diffusion Models
Qinghe Wang; Baolu Li; Xiaomin Li; Bing Cao; Liqian Ma; Huchuan Lu; Xu Jia
DOI: 10.1109/TIP.2025.3558668 | Published: 2025-04-14 | Vol. 34, pp. 2544-2559
Abstract: Recent advances in text-to-image models have opened new frontiers in human-centric generation. However, these models cannot be directly employed to generate images with consistent, newly coined identities. In this work, we propose CharacterFactory, a framework that allows sampling new characters with consistent identities in the latent space of GANs for diffusion models. More specifically, we consider the word embeddings of celeb names as ground truths for the identity-consistent generation task and train a GAN model to learn the mapping from a latent space to the celeb embedding space. In addition, we design a context-consistent loss to ensure that the generated identity embeddings produce identity-consistent images in various contexts. Remarkably, the whole model takes only 10 minutes to train and can sample infinite characters end-to-end during inference. Extensive experiments demonstrate the excellent performance of the proposed CharacterFactory on character creation in terms of identity consistency and editability. Furthermore, the generated characters can be seamlessly combined with off-the-shelf image/video/3D diffusion models. We believe that the proposed CharacterFactory is an important step toward identity-consistent character generation. Code and Gradio demo are available at: https://qinghew.github.io/CharacterFactory/
Citations: 0
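
The central mechanism, a GAN trained to map latent codes into the celeb word-embedding space, can be sketched with two small MLPs and one adversarial update each. The embedding width, network sizes, and the random stand-in for real celeb-name embeddings are assumptions, and the paper's context-consistent loss is omitted here.

import torch
import torch.nn as nn

emb_dim, z_dim = 768, 64            # text-embedding width and latent size (assumptions)

# Generator maps a latent code to a pseudo identity embedding; the discriminator
# tries to tell generated identity embeddings from real celeb-name embeddings.
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, emb_dim))
D = nn.Sequential(nn.Linear(emb_dim, 256), nn.ReLU(), nn.Linear(256, 1))
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)

real = torch.randn(32, emb_dim)     # stand-in for celeb word embeddings (assumption)
z = torch.randn(32, z_dim)

# One discriminator step, then one generator step (non-saturating GAN loss).
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

loss_g = bce(D(G(z)), torch.ones(32, 1))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

At inference, sampling a new consistent character would amount to drawing z, computing G(z), and inserting the resulting embedding into the diffusion model's text-conditioning stream.
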
Double Oracle Neural Architecture Search for Game Theoretic Deep Learning Models
Aye Phyu Phyu Aung; Xinrun Wang; Ruiyu Wang; Hau Chan; Bo An; Xiaoli Li; J. Senthilnath
DOI: 10.1109/TIP.2025.3558420 | Published: 2025-04-11 | Vol. 34, pp. 2463-2472
Abstract: In this paper, we propose a new approach to training deep learning models based on game-theoretic concepts, including Generative Adversarial Networks (GANs) and Adversarial Training (AT), by deploying a double-oracle framework with best-response oracles. A GAN is essentially a two-player zero-sum game between the generator and the discriminator. The same concept applies to AT, with the attacker and the classifier as players. Training these models is challenging because a pure Nash equilibrium may not exist, and even finding a mixed Nash equilibrium is difficult, as training algorithms for both GAN and AT have a large-scale strategy space. Extending our preliminary model DO-GAN, we propose methods to apply the double-oracle framework to Adversarial Neural Architecture Search (NAS for GAN) and Adversarial Training (NAS for AT) algorithms. We first generalize the players' strategies as the trained generator and discriminator models obtained from the best-response oracles. We then compute the meta-strategies using a linear program. For scalability of the framework, where multiple best-response network models are stored in memory, we prune the weakly dominated players' strategies to keep the oracles tractable. Finally, we conduct experiments on MNIST, CIFAR-10, and TinyImageNet for DONAS-GAN. We also evaluate robustness under FGSM and PGD attacks on CIFAR-10, SVHN, and TinyImageNet for DONAS-AT. All our variants achieve significant improvements in both subjective qualitative evaluation and quantitative metrics, compared with their respective base architectures.
Citations: 0
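
Computing the meta-strategies with a linear program, as stated above, is the standard maximin LP for a zero-sum meta-game whose rows and columns are the best-response models gathered so far. A sketch with SciPy follows; the toy payoff matrix is an assumption standing in for evaluated generator-vs-discriminator payoffs.

import numpy as np
from scipy.optimize import linprog

def meta_strategy(payoff):
    """Maximin mixed strategy of the row player for a zero-sum meta-game, via an LP."""
    m, n = payoff.shape
    # Variables: [x_1..x_m, v]; maximize v  <=>  minimize -v.
    c = np.zeros(m + 1); c[-1] = -1.0
    # For every column j: v - sum_i payoff[i, j] * x_i <= 0.
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # probabilities sum to 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]

# Toy 3x3 meta-payoff between stored best-response models (illustrative numbers).
payoff = np.array([[0.0, 1.0, -1.0],
                   [-1.0, 0.0, 1.0],
                   [1.0, -1.0, 0.0]])
probs, value = meta_strategy(payoff)
print(probs, value)          # rock-paper-scissors structure -> uniform mix, value 0
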
Cross-Modal Knowledge Diffusion-Based Generation for Difference-Aware Medical VQA
Qika Lin; Kai He; Yifan Zhu; Fangzhi Xu; Erik Cambria; Mengling Feng
DOI: 10.1109/TIP.2025.3558446 | Published: 2025-04-11 | Vol. 34, pp. 2421-2434
Abstract: Multimodal medical applications have garnered considerable attention due to their potential to offer comprehensive and robust support for medical assistance. Within this domain, difference-aware medical Visual Question Answering (VQA) has emerged as a topic of increasing interest: it recognizes changes in physical condition over time compared with previous states and provides customized suggestions accordingly. The task is challenging because samples usually exhibit complexity, diversity, and inherent noise, and it requires multimodal understanding of medical-domain knowledge. The difference-aware setting, which requires image comparison, further intensifies these difficulties. To this end, we propose a cross-Modal knowlEdge diffusioN-baseD gEneration netwoRk (MENDER), in which a diffusion mechanism with multi-step denoising and knowledge injection from the global to the local level is employed to tackle these challenges. The diffusion process gradually generates answers given the question token sequence, random noise for the answer masks, and virtual vision prompts of the images. The strategy of answer noising and knowledge cascading is specifically tailored for this task and is implemented during the forward and reverse diffusion processes. Moreover, visual and structural knowledge injection are proposed to learn virtual vision prompts that guide the diffusion process, where the former is realized with a pre-trained medical image-text network and the latter is modeled with spatial and semantic graph structures processed by heterogeneous graph Transformer models. Experimental results demonstrate the effectiveness of MENDER for difference-aware medical VQA. Furthermore, it also exhibits notable performance in the low-resource setting and on conventional medical VQA tasks.
Citations: 0
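
The answer-noising side of the diffusion process can be illustrated with the generic Gaussian forward step that a denoiser, conditioned on question tokens and virtual vision prompts, would learn to invert. The linear beta schedule, tensor shapes, and continuous (rather than the paper's mask-based) noising are assumptions; this is not MENDER's specific formulation.

import torch

T = 1000
betas = torch.linspace(1e-4, 2e-2, T)          # assumed linear noise schedule
alpha_bar = torch.cumprod(1.0 - betas, dim=0)

def q_sample(answer_emb, t):
    """Noise clean answer embeddings x_0 to x_t: sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * eps."""
    eps = torch.randn_like(answer_emb)
    a = alpha_bar[t]
    return a.sqrt() * answer_emb + (1.0 - a).sqrt() * eps

answer_emb = torch.randn(1, 16, 256)           # batch, answer length, embedding dim (assumptions)
x_t = q_sample(answer_emb, t=500)
# A denoiser would take (x_t, question tokens, vision prompts, t) and predict the clean answer.
print(x_t.shape)
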
Integrating the Space of Reflectance Spectra
Graham D. Finlayson; Javier Vazquez-Corral; Fufu Fang
DOI: 10.1109/TIP.2025.3558443 | Published: 2025-04-11 | Vol. 34, pp. 2588-2601 | Open-access PDF: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10964071
Abstract: Color imaging algorithms - such as color correction, spectral estimation and color constancy - are developed and validated with spectral reflectance data. However, the choice of the reflectance data set used in development and tuning not only affects the results of these algorithms but also changes the ranking of the different approaches. We propose that this fragility arises because it is difficult to measure or sample enough data to statistically represent the large number of degrees of freedom apparent in spectral reflectances. In this paper, we propose that the space of reflectance data should not be sampled but, rather, integrated. Specifically, we advocate that the convex closure of a reflectance data set - all convex combinations of all spectra - should be used instead of discrete reflectance samples. To make the integration computation tractable, we approximate these convex closures by their enclosing hyper-cube in a privileged coordinate system. We use color correction as an exemplar color imaging problem to demonstrate the utility of our approach.
Citations: 0
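
The enclosing hyper-cube approximation can be sketched directly: transform a reflectance set into a convenient coordinate system, take per-axis minima and maxima, and treat any point of the resulting box as a spectrum to be integrated over. Using the PCA basis as the "privileged" coordinate system and the synthetic beta-distributed spectra are assumptions; the paper's actual choice of coordinates may differ.

import numpy as np

rng = np.random.default_rng(4)
# Stand-in for a measured reflectance set: 200 spectra sampled at 31 wavelengths (assumption).
R = np.clip(rng.beta(2, 2, size=(200, 31)), 0, 1)

# "Privileged" coordinate system sketched here as the PCA basis of the data.
mean = R.mean(axis=0)
_, _, Vt = np.linalg.svd(R - mean, full_matrices=False)
coords = (R - mean) @ Vt.T

# Enclosing hyper-cube: per-axis min/max in that coordinate system.
lo, hi = coords.min(axis=0), coords.max(axis=0)

# Any point of the cube maps back to a spectrum; integrating over the cube replaces
# summing over the discrete samples when tuning a color-imaging algorithm.
u = rng.uniform(lo, hi)
spectrum = u @ Vt + mean
print(spectrum.shape, float(spectrum.min()), float(spectrum.max()))
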