International Journal of Computer Vision: Latest Articles

Temporal Transductive Inference for Few-Shot Video Object Segmentation
IF 19.5, CAS Zone 2, Computer Science
International Journal of Computer Vision, Pub Date: 2025-03-06, DOI: 10.1007/s11263-025-02390-x
Mennatullah Siam
Abstract: Few-shot video object segmentation (FS-VOS) aims at segmenting video frames using a few labelled examples of classes not seen during initial training. In this paper, we present a simple but effective temporal transductive inference (TTI) approach that leverages temporal consistency in the unlabelled video frames during few-shot inference without episodic training. Key to our approach is the use of a video-level temporal constraint that augments frame-level constraints. The objective of the video-level constraint is to learn consistent linear classifiers for novel classes across the image sequence. It acts as a spatiotemporal regularizer during transductive inference, increasing temporal coherence and reducing overfitting on the few-shot support set. Empirically, our approach outperforms state-of-the-art meta-learning approaches by 2.5% mean intersection over union on YouTube-VIS. In addition, we introduce an improved benchmark dataset that is exhaustively labelled (i.e., all object occurrences are labelled, unlike currently available benchmarks). Our empirical results and temporal consistency analysis confirm the benefit of the proposed spatiotemporal regularizer for improving temporal coherence. Our code and benchmark dataset are publicly available at https://github.com/MSiam/tti_fsvos/.
Citations: 0
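To make the transductive idea above concrete, the following is a minimal sketch of test-time fitting of a linear classifier on frozen per-frame features, combining a support-set cross-entropy with a simple temporal consistency term on the unlabelled query frames. The tensor shapes and the simplified loss are illustrative assumptions, not the TTI implementation.

```python
# Minimal transductive-inference sketch: fit a 2-way linear classifier at test
# time using labelled support pixels plus a temporal smoothness term on the
# unlabelled query video. Shapes and loss terms are simplified assumptions.
import torch
import torch.nn.functional as F

def transductive_inference(support_feats, support_masks, query_feats,
                           steps=50, lr=0.1, lam=1.0):
    """support_feats: (Ns, C, H, W); support_masks: (Ns, H, W) with {0,1};
       query_feats:   (T, C, H, W) unlabelled frames of one query video."""
    C = support_feats.shape[1]
    w = torch.zeros(2, C, requires_grad=True)   # foreground/background weights
    b = torch.zeros(2, requires_grad=True)
    opt = torch.optim.SGD([w, b], lr=lr)

    def logits(feats):  # (N, C, H, W) -> (N, 2, H, W)
        return torch.einsum('nchw,kc->nkhw', feats, w) + b.view(1, 2, 1, 1)

    for _ in range(steps):
        opt.zero_grad()
        # Supervised term on the few labelled support frames.
        ce = F.cross_entropy(logits(support_feats), support_masks.long())
        # Temporal term: predictions on consecutive unlabelled frames agree.
        q = logits(query_feats).softmax(dim=1)        # (T, 2, H, W)
        temporal = (q[1:] - q[:-1]).abs().mean()
        (ce + lam * temporal).backward()
        opt.step()
    return logits(query_feats).argmax(dim=1)          # (T, H, W) predicted masks
```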
Part-Whole Relational Fusion Towards Multi-Modal Scene Understanding
IF 19.5, CAS Zone 2, Computer Science
International Journal of Computer Vision, Pub Date: 2025-03-06, DOI: 10.1007/s11263-025-02393-8
Yi Liu, Chengxin Li, Shoukun Xu, Jungong Han
Abstract: Multi-modal fusion plays a vital role in multi-modal scene understanding. Most existing methods focus on cross-modal fusion involving two modalities, often overlooking more complex multi-modal fusion, which is essential for real-world applications such as autonomous driving, where visible, depth, event, LiDAR, and other modalities are used. Moreover, the few existing attempts at multi-modal fusion, e.g., simple concatenation, cross-modal attention, and token selection, cannot adequately capture the intrinsic shared and specific details of multiple modalities. To tackle this challenge, we propose a Part-Whole Relational Fusion (PWRF) framework. For the first time, this framework treats multi-modal fusion as part-whole relational fusion. It routes multiple individual part-level modalities to a fused whole-level modality using the part-whole relational routing ability of Capsule Networks (CapsNets). Through this part-whole routing, PWRF generates modal-shared and modal-specific semantics from the whole-level modal capsules and the routing coefficients, respectively. These modal-shared and modal-specific details can then be employed for multi-modal scene understanding, including synthetic multi-modal segmentation and visible-depth-thermal salient object detection in this paper. Experiments on several datasets demonstrate the superiority of the proposed PWRF framework for multi-modal scene understanding. The source code has been released at https://github.com/liuyi1989/PWRF.
Citations: 0
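The part-whole routing that PWRF builds on can be pictured with a generic CapsNet dynamic-routing step, where part-level modality capsules vote for whole-level fused capsules; the whole capsules act as modal-shared representations and the routing coefficients as per-modality weights. The shapes and the vanilla routing rule below are assumptions, not the released PWRF code.

```python
# Generic dynamic-routing sketch: route part-level modality capsules to
# whole-level fused capsules (vanilla CapsNet routing, not the PWRF module).
import torch

def squash(s, dim=-1, eps=1e-8):
    n2 = (s ** 2).sum(dim=dim, keepdim=True)
    return (n2 / (1.0 + n2)) * s / (n2.sqrt() + eps)

def route_parts_to_whole(part_caps, W, iters=3):
    """part_caps: (B, M, Dp), one capsule per modality (part level);
       W:         (M, K, Dp, Dw), learned transforms to K whole capsules."""
    B, M, _ = part_caps.shape
    K = W.shape[1]
    votes = torch.einsum('bmd,mkde->bmke', part_caps, W)   # (B, M, K, Dw)
    logits = torch.zeros(B, M, K)
    for _ in range(iters):
        c = logits.softmax(dim=2)                          # routing coefficients
        whole = squash((c.unsqueeze(-1) * votes).sum(1))   # (B, K, Dw)
        logits = logits + (votes * whole.unsqueeze(1)).sum(-1)
    # whole-level capsules ~ modal-shared semantics; c ~ modal-specific weights
    return whole, c
```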
UMSCS: A Novel Unpaired Multimodal Image Segmentation Method Via Cross-Modality Generative and Semi-supervised Learning
IF 19.5, CAS Zone 2, Computer Science
International Journal of Computer Vision, Pub Date: 2025-03-06, DOI: 10.1007/s11263-025-02389-4
Feiyang Yang, Xiongfei Li, Bo Wang, Peihong Teng, Guifeng Liu
Abstract: Multimodal medical image segmentation is crucial for enhancing diagnostic accuracy in various clinical settings. However, because complete data are difficult to obtain in real clinical settings, the use of unpaired and unlabeled multimodal data is severely limited: unpaired data cannot be fed to a model simultaneously due to spatial misalignments and morphological differences, and unlabeled data fail to provide effective supervisory signals. To alleviate these issues, we propose a semi-supervised multimodal segmentation method based on cross-modality generation that seamlessly integrates image translation and segmentation stages. In the cross-modality generation stage, we employ adversarial learning to discern the latent anatomical correlations across modalities, then balance semantic and structural consistency during image translation through region-aware constraints and cross-modal structural-information contrastive learning with dynamic weight adjustment. In the segmentation stage, we employ a teacher-student semi-supervised learning (SSL) framework in which the student network distills multimodal knowledge from the teacher network and utilizes unlabeled source data to enhance the supervisory signal. Experimental results demonstrate that the proposed method achieves state-of-the-art performance in extensive experiments on cardiac substructure and multi-organ abdominal segmentation, outperforming other competitive methods.
Citations: 0
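The segmentation stage can be sketched as a standard mean-teacher update: a supervised loss on labelled images, a consistency loss against an exponential-moving-average (EMA) teacher on unlabelled images, and an EMA weight update. The network, loss choices, and hyperparameters below are placeholders, not the UMSCS implementation.

```python
# Mean-teacher style semi-supervised step: supervised CE on labelled data plus
# softmax consistency against an EMA teacher on unlabelled data (placeholders).
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, m=0.99):
    with torch.no_grad():
        for pt, ps in zip(teacher.parameters(), student.parameters()):
            pt.mul_(m).add_(ps, alpha=1.0 - m)

def train_step(student, teacher, opt, x_lab, y_lab, x_unlab, lam=1.0):
    opt.zero_grad()
    sup = F.cross_entropy(student(x_lab), y_lab)            # labelled images
    with torch.no_grad():
        pseudo = teacher(x_unlab).softmax(dim=1)             # teacher targets
    cons = F.mse_loss(student(x_unlab).softmax(dim=1), pseudo)
    (sup + lam * cons).backward()
    opt.step()
    ema_update(teacher, student)

# Usage sketch with a hypothetical MySegNet:
#   student = MySegNet(); teacher = copy.deepcopy(student)
#   for p in teacher.parameters(): p.requires_grad_(False)
```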
METS: Motion-Encoded Time-Surface for Event-Based High-Speed Pose Tracking
IF 19.5, CAS Zone 2, Computer Science
International Journal of Computer Vision, Pub Date: 2025-03-05, DOI: 10.1007/s11263-025-02379-6
Ninghui Xu, Lihui Wang, Zhiting Yao, Takayuki Okatani
Abstract: We present a novel event-based representation, named Motion-Encoded Time-Surface (METS), and show how it can be used to address the challenge of pose tracking in high-speed scenarios with an event camera. The core concept is to dynamically encode the pixel-wise decay rate of the Time-Surface to account for the localized spatio-temporal scene dynamics captured by events, yielding remarkable adaptability to motion dynamics. The consistency between METS and the scene under highly dynamic conditions establishes a reliable foundation for robust pose estimation. Building upon this, we employ a semi-dense 3D-2D alignment pipeline to fully unlock the potential of the event camera for high-speed tracking. Given the intrinsic characteristics of METS, we further develop specialized lightweight operations aimed at minimizing the per-event computational cost. The proposed algorithm is evaluated on public datasets and on our high-speed motion datasets covering various scenes and motion complexities. The results show that our approach outperforms state-of-the-art pose tracking methods, especially in highly dynamic scenarios, and tracks accurately under extremely fast motions that are inaccessible to other event- or frame-based counterparts. Owing to its simplicity, the algorithm is highly practical, running at over 70 Hz on a standard CPU.
Citations: 0
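One crude way to picture a time-surface with motion-adaptive decay is to let each pixel's decay constant shrink as its recent event rate grows, so busier regions fade faster. The sketch below uses this proxy only; the actual METS encoding in the paper differs.

```python
# Time-surface with a per-pixel decay rate adapted to local event density
# (a rough stand-in for motion-encoded decay, not the METS formulation).
import numpy as np

def adaptive_time_surface(events, t_query, H, W, base_tau=0.05, win=0.1):
    """events: (N, 4) array of (t, x, y, polarity) sorted by time t."""
    last_t = np.full((H, W), -np.inf)
    rate = np.zeros((H, W))
    for t, x, y, _ in events:
        if t > t_query:
            break
        last_t[int(y), int(x)] = t
        if t > t_query - win:              # count events in the recent window
            rate[int(y), int(x)] += 1
    tau = base_tau / (1.0 + rate)          # faster local dynamics -> faster decay
    ts = np.exp(-(t_query - last_t) / tau)
    ts[np.isneginf(last_t)] = 0.0          # pixels that never fired
    return ts
```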
Unknown Support Prototype Set for Open Set Recognition
IF 19.5, CAS Zone 2, Computer Science
International Journal of Computer Vision, Pub Date: 2025-03-03, DOI: 10.1007/s11263-025-02384-9
Guosong Jiang, Pengfei Zhu, Bing Cao, Dongyue Chen, Qinghua Hu
Abstract: In real-world applications, visual recognition systems inevitably encounter unknown classes that are not present in the training set. Open set recognition aims to classify samples from known classes and detect unknowns simultaneously. One promising solution is to inject unknowns into the training set, and significant progress has been made on how to build an unknowns generator. However, which unknowns exhibit strong generalization is rarely explored. This work presents a new concept called Unknown Support Prototypes, which serve as good representatives of potential unknown classes. Two novel metrics, Support and Diversity, are introduced to construct the Unknown Support Prototype Set. We further propose to construct Unknown Support Prototypes in the semantic subspace of the feature space, which largely reduces the cardinality of the Unknown Support Prototype Set and enhances the reliability of unknowns generation. Extensive experiments on several benchmark datasets demonstrate that the proposed algorithm generalizes effectively to unknowns.
Citations: 0
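A toy version of the selection step might greedily pick candidate unknown features that score high on a support proxy (classifier entropy, i.e. lying near known-class boundaries) while staying far from already chosen prototypes. Both proxies below are simplified stand-ins, not the paper's Support and Diversity metrics.

```python
# Greedy selection of a small prototype set from candidate unknown features,
# trading off an entropy-based "support" proxy against pairwise diversity.
import torch

def select_unknown_prototypes(cands, known_classifier, k=10, alpha=1.0):
    """cands: (N, D) candidate unknown features;
       known_classifier: callable mapping (N, D) features to (N, C) logits."""
    with torch.no_grad():
        probs = known_classifier(cands).softmax(dim=1)
    support = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)   # entropy proxy
    chosen = [int(support.argmax())]
    for _ in range(k - 1):
        d = torch.cdist(cands, cands[chosen]).min(dim=1).values   # diversity proxy
        score = support + alpha * d
        score[chosen] = -float('inf')                             # avoid repeats
        chosen.append(int(score.argmax()))
    return cands[chosen]
```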
LaMD: Latent Motion Diffusion for Image-Conditional Video Generation
IF 19.5, CAS Zone 2, Computer Science
International Journal of Computer Vision, Pub Date: 2025-03-03, DOI: 10.1007/s11263-025-02386-7
Yaosi Hu, Zhenzhong Chen, Chong Luo
Abstract: The video generation field has seen rapid improvements with the introduction of recent diffusion models. While these models have successfully enhanced appearance quality, they still face challenges in generating coherent and natural movements while sampling videos efficiently. In this paper, we propose to condense video generation into a problem of motion generation, to improve the expressiveness of motion and make video generation more manageable. This is achieved by breaking the video generation process down into latent motion generation and video reconstruction. Specifically, we present a latent motion diffusion (LaMD) framework, which consists of a motion-decomposed video autoencoder and a diffusion-based motion generator. Through careful design, the motion-decomposed video autoencoder compresses movement patterns into a concise latent motion representation. Consequently, the diffusion-based motion generator efficiently generates realistic motion in a continuous latent space under multi-modal conditions, at a cost similar to that of image diffusion models. Results show that LaMD generates high-quality videos on various benchmark datasets, including BAIR, Landscape, NATOPS, MUG and CATER-GEN, which encompass a variety of stochastic dynamics and highly controllable movements, across multiple image-conditional video generation tasks, while significantly decreasing sampling time.
Citations: 0
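The generation stage can be pictured as standard DDPM ancestral sampling of a latent motion code conditioned on an image embedding, followed by decoding frames from the (image, motion) pair. The eps_model, decoder, and noise schedule below are hypothetical placeholders, not LaMD's components.

```python
# Bare-bones DDPM ancestral sampling of a latent motion code z conditioned on
# an image embedding, then video reconstruction; modules are placeholders.
import torch

@torch.no_grad()
def sample_video(eps_model, decoder, img_emb, z_shape, T=1000):
    betas = torch.linspace(1e-4, 0.02, T)          # standard linear schedule
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)
    z = torch.randn(z_shape)                        # start from pure noise
    for t in reversed(range(T)):
        eps = eps_model(z, torch.full((z_shape[0],), t), img_emb)
        mean = (z - betas[t] / (1 - abar[t]).sqrt() * eps) / alphas[t].sqrt()
        z = mean + betas[t].sqrt() * torch.randn_like(z) if t > 0 else mean
    return decoder(img_emb, z)                      # frames from (image, motion)
```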
LMD: Light-Weight Prediction Quality Estimation for Object Detection in Lidar Point Clouds
IF 19.5, CAS Zone 2, Computer Science
International Journal of Computer Vision, Pub Date: 2025-02-28, DOI: 10.1007/s11263-025-02377-8
Tobias Riedlinger, Marius Schubert, Sarina Penquitt, Jan-Marcel Kezmann, Pascal Colling, Karsten Kahl, Lutz Roese-Koerner, Michael Arnold, Urs Zimmermann, Matthias Rottmann
Abstract: Object detection on Lidar point cloud data is a promising technology for autonomous driving and robotics that has seen a significant rise in performance and accuracy in recent years. Uncertainty estimation in particular is a crucial component for downstream tasks, and deep neural networks remain error-prone even for predictions with high confidence. Previously proposed methods for quantifying prediction uncertainty tend to alter the training scheme of the detector or rely on prediction sampling, which results in vastly increased inference time. To address these two issues, we propose LidarMetaDetect (LMD), a light-weight post-processing scheme for prediction quality estimation. Our method can easily be added to any pre-trained Lidar object detector without altering the base model; it is purely post-processing and therefore introduces only negligible computational overhead. Our experiments show a significant increase in the statistical reliability of separating true from false predictions. We show that this improvement carries over to object detection performance when the meta-classifier score replaces the objectness score native to the object detector. We also propose and evaluate an additional application of our method to the detection of annotation errors. Explicit examples and a conservative count of annotation-error proposals indicate the viability of our method for large-scale datasets such as KITTI and nuScenes. On the widely used nuScenes test dataset, 43 of our method's top 100 proposals do, in fact, indicate erroneous annotations.
Citations: 0
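A post-hoc meta-classifier of this kind can be sketched with a few hand-crafted per-box features and a logistic regression whose predicted probability replaces the objectness score. The feature set, box format, and classifier below are illustrative assumptions, not LMD's exact pipeline.

```python
# Light-weight prediction-quality estimation: fit a small classifier on
# per-box features to predict TP vs. FP, then use its probability as a score.
import numpy as np
from sklearn.linear_model import LogisticRegression

def box_features(det):
    """det: dict with 'score', 'box' = (x, y, z, l, w, h, yaw), 'num_points'."""
    l, w, h = det['box'][3:6]
    return [det['score'], l * w * h, det['num_points'], det['box'][2]]

def fit_meta_classifier(detections, is_true_positive):
    """is_true_positive: 0/1 labels from IoU-matching boxes to ground truth."""
    X = np.array([box_features(d) for d in detections])
    y = np.array(is_true_positive)
    return LogisticRegression(max_iter=1000).fit(X, y)

def meta_score(clf, det):
    """Probability of being a true positive, used in place of objectness."""
    return clf.predict_proba(np.array([box_features(det)]))[0, 1]
```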
Realistic Evaluation of Deep Active Learning for Image Classification and Semantic Segmentation
IF 19.5, CAS Zone 2, Computer Science
International Journal of Computer Vision, Pub Date: 2025-02-28, DOI: 10.1007/s11263-025-02372-z
Sudhanshu Mittal, Joshua Niemeijer, Özgün Çiçek, Maxim Tatarchenko, Jan Ehrhardt, Jörg P. Schäfer, Heinz Handels, Thomas Brox
Abstract: Active learning aims to reduce the high labeling cost involved in training machine learning models on large datasets by labeling only the most informative samples. Recently, deep active learning has shown success on various tasks, but the conventional evaluation schemes are either incomplete or below par. This study critically assesses various active learning approaches, identifying the key factors essential for choosing the most effective method, and provides a comprehensive guide for obtaining the best performance in each case for image classification and semantic segmentation. For image classification, active learning (AL) methods improve by a large margin when integrated with data augmentation and semi-supervised learning, but barely perform better than the random baseline; we therefore evaluate them under more realistic settings and propose a more suitable evaluation protocol. For semantic segmentation, previous academic studies focused on diverse datasets with substantial annotation resources; in contrast, data collected in many driving scenarios is highly redundant, and most medical applications are subject to very constrained annotation budgets. The study evaluates active learning techniques under various conditions, including data redundancy, the use of semi-supervised learning, and differing annotation budgets. As an outcome of our study, we provide a comprehensive usage guide to obtain the best performance for each case.
Citations: 0
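For reference, a pool-based active-learning loop of the kind such evaluations compare looks roughly as follows, with entropy-based acquisition against which a random baseline can be swapped in. The model interface, budget, and acquisition function are generic placeholders rather than the study's protocol.

```python
# Generic pool-based active-learning loop with an entropy acquisition function;
# the model is assumed to expose sklearn-style fit/predict_proba.
import numpy as np

def entropy_acquire(probs, k):
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-ent)[:k]                      # k most uncertain samples

def active_learning_loop(model, X_pool, y_pool, X_init, y_init,
                         rounds=5, budget=100, acquire=entropy_acquire):
    X_lab, y_lab = X_init, y_init
    pool_idx = np.arange(len(X_pool))
    for _ in range(rounds):
        model.fit(X_lab, y_lab)                      # retrain on labelled set
        probs = model.predict_proba(X_pool[pool_idx])
        picked = pool_idx[acquire(probs, budget)]    # indices to send for labels
        X_lab = np.concatenate([X_lab, X_pool[picked]])
        y_lab = np.concatenate([y_lab, y_pool[picked]])
        pool_idx = np.setdiff1d(pool_idx, picked)
    return model.fit(X_lab, y_lab)
```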
On the Trustworthiness Landscape of State-of-the-art Generative Models: A Survey and Outlook
IF 19.5, CAS Zone 2, Computer Science
International Journal of Computer Vision, Pub Date: 2025-02-28, DOI: 10.1007/s11263-025-02375-w
Mingyuan Fan, Chengyu Wang, Cen Chen, Yang Liu, Jun Huang
Abstract: Diffusion models and large language models have emerged as leading-edge generative models, revolutionizing various aspects of human life. However, their practical deployment has also exposed inherent risks, bringing their potential downsides to light and sparking concerns about their trustworthiness. Despite the wealth of literature on this subject, a comprehensive survey specifically addressing the intersection of large-scale generative models and their trustworthiness remains largely absent. To bridge this gap, this paper investigates both long-standing and emerging threats associated with these models across four fundamental dimensions: (1) privacy, (2) security, (3) fairness, and (4) responsibility. Based on our findings, we develop an extensive survey that outlines the trustworthiness landscape of large generative models. We then provide practical recommendations and identify promising research directions for generative AI, ultimately promoting the trustworthiness of these models and benefiting society as a whole.
Citations: 0
Fg-T2M++: LLMs-Augmented Fine-Grained Text Driven Human Motion Generation
IF 19.5, CAS Zone 2, Computer Science
International Journal of Computer Vision, Pub Date: 2025-02-27, DOI: 10.1007/s11263-025-02392-9
Yin Wang, Mu Li, Jiapeng Liu, Zhiying Leng, Frederick W. B. Li, Ziyao Zhang, Xiaohui Liang
Abstract: We address the challenging problem of fine-grained text-driven human motion generation. Existing works generate imprecise motions that fail to accurately capture the relationships specified in text because they (1) lack effective text parsing for detailed semantic cues regarding body parts, and (2) do not fully model the linguistic structure between words needed to comprehend the text comprehensively. To tackle these limitations, we propose a novel fine-grained framework, Fg-T2M++, that consists of: (1) an LLMs semantic parsing module to extract body part descriptions and semantics from text, (2) a hyperbolic text representation module to encode relational information between text units by embedding the syntactic dependency graph into hyperbolic space, and (3) a multi-modal fusion module to hierarchically fuse text and motion features. Extensive experiments on the HumanML3D and KIT-ML datasets demonstrate that Fg-T2M++ outperforms SOTA methods, validating its ability to accurately generate motions adhering to comprehensive text semantics.
Citations: 0
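The hyperbolic text representation step can be illustrated with standard Poincare-ball operations at curvature c = 1: map Euclidean word features into the ball with the exponential map at the origin and compare them with the geodesic distance. This shows the general operation only, not the paper's module.

```python
# Poincare-ball (c = 1) embedding utilities: exponential map at the origin and
# geodesic distance, as used for hierarchy-aware text features (generic sketch).
import torch

def expmap0(v, eps=1e-6):
    """Map Euclidean vectors (..., D) to points inside the unit Poincare ball."""
    norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
    return torch.tanh(norm) * v / norm

def poincare_dist(u, v, eps=1e-6):
    """Geodesic distance between points u, v strictly inside the unit ball."""
    diff2 = (u - v).pow(2).sum(dim=-1)
    denom = (1 - u.pow(2).sum(dim=-1)).clamp_min(eps) * \
            (1 - v.pow(2).sum(dim=-1)).clamp_min(eps)
    return torch.acosh(1 + 2 * diff2 / denom)

# Usage sketch with a hypothetical encoder:
#   word_feats = encoder(tokens)              # (L, D) Euclidean features
#   ball = expmap0(word_feats)
#   d01 = poincare_dist(ball[0], ball[1])     # hyperbolic distance between words
```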