Latest Articles from IEEE Transactions on Pattern Analysis and Machine Intelligence

Mask-DiFuser: A Masked Diffusion Model for Unified Unsupervised Image Fusion.
IF 23.6 | Q1 | Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence Pub Date : 2025-09-12 DOI: 10.1109/tpami.2025.3609323
Linfeng Tang, Chunyu Li, Jiayi Ma
Abstract: The absence of ground truth (GT) in most fusion tasks poses significant challenges for model optimization, evaluation, and generalization. Existing fusion methods achieving complementary context aggregation predominantly rely on hand-crafted fusion rules and sophisticated loss functions, which introduce subjectivity and often fail to adapt to complex real-world scenarios. To address this challenge, we propose Mask-DiFuser, a novel fusion paradigm that ingeniously transforms the unsupervised image fusion task into a dual masked image reconstruction task by incorporating masked image modeling with a diffusion model, overcoming various issues arising from the absence of GT. In particular, we devise a dual masking scheme to simulate complementary information and employ a diffusion model to restore source images from two masked inputs, thereby aggregating complementary contexts. A content encoder with an attention parallel feature mixer is deployed to extract and integrate complementary features, offering local content guidance. Moreover, a semantic encoder is developed to supply global context which is integrated into the diffusion model via a cross-attention mechanism. During inference, Mask-DiFuser begins with a Gaussian distribution and iteratively denoises it conditioned on multi-source images to directly generate fused images. The masked diffusion model, learning priors from high-quality natural images, ensures that fusion results align more closely with human visual perception. Extensive experiments on several fusion tasks, including infrared-visible, medical, multi-exposure, and multi-focus image fusion, demonstrate that Mask-DiFuser significantly outshines SOTA fusion alternatives.
Citations: 0
Bayesian Unsupervised Disentanglement of Anatomy and Geometry for Deep Groupwise Image Registration.
IF 23.6 | Q1 | Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence Pub Date : 2025-09-12 DOI: 10.1109/tpami.2025.3609521
Xinzhe Luo, Xin Wang, Linda Shapiro, Chun Yuan, Jianfeng Feng, Xiahai Zhuang
Abstract: This article presents a general Bayesian learning framework for multi-modal groupwise image registration. The method builds on probabilistic modelling of the image generative process, where the underlying common anatomy and geometric variations of the observed images are explicitly disentangled as latent variables. Therefore, groupwise image registration is achieved via hierarchical Bayesian inference. We propose a novel hierarchical variational auto-encoding architecture to realise the inference procedure of the latent variables, where the registration parameters can be explicitly estimated in a mathematically interpretable fashion. Remarkably, this new paradigm learns groupwise image registration in an unsupervised closed-loop self-reconstruction process, sparing the burden of designing complex image-based similarity measures. The computationally efficient disentangled network architecture is also inherently scalable and flexible, allowing for groupwise registration on large-scale image groups with variable sizes. Furthermore, the inferred structural representations from multi-modal images via disentanglement learning are capable of capturing the latent anatomy of the observations with visual semantics. Extensive experiments were conducted to validate the proposed framework, including four different datasets from cardiac, brain, and abdominal medical images. The results have demonstrated the superiority of our method over conventional similarity-based approaches in terms of accuracy, efficiency, scalability, and interpretability.
Citations: 0
A Unified Perspective for Loss-Oriented Imbalanced Learning via Localization.
IF 23.6 | Q1 | Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence Pub Date : 2025-09-12 DOI: 10.1109/tpami.2025.3609440
Zitai Wang, Qianqian Xu, Zhiyong Yang, Zhikang Xu, Linchao Zhang, Xiaochun Cao, Qingming Huang
Abstract: Due to the inherent imbalance in real-world datasets, naïve Empirical Risk Minimization (ERM) tends to bias the learning process towards the majority classes, hindering generalization to minority classes. To rebalance the learning process, one straightforward yet effective approach is to modify the loss function via class-dependent terms, such as re-weighting and logit-adjustment. However, existing analysis of these loss-oriented methods remains coarse-grained and fragmented, failing to explain some empirical results. After reviewing prior work, we find that the properties used through their analysis are typically global, i.e., defined over the whole dataset. Hence, these properties fail to effectively capture how class-dependent terms influence the learning process. To bridge this gap, we turn to explore the localized versions of such properties, i.e., defined within each class. Specifically, we employ localized calibration to provide consistency validation across a broader range of losses and localized Lipschitz continuity to provide a fine-grained generalization bound. In this way, we reach a unified perspective for improving and adjusting loss-oriented methods. Finally, a principled learning algorithm is developed based on these insights. Empirical results on both traditional ResNets and foundation models validate our theoretical analyses and demonstrate the effectiveness of the proposed method.
Citations: 0
M3D: a Multimodal, Multilingual and Multitask Dataset for Grounded Document-level Information Extraction
IF 23.6 | Q1 | Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence Pub Date : 2025-09-11 DOI: 10.1109/tpami.2025.3609288
Jiang Liu, Bobo Li, Xinran Yang, Na Yang, Hao Fei, Mingyao Zhang, Fei Li, Donghong Ji
{"title":"M3D: a Multimodal, Multilingual and Multitask Dataset for Grounded Document-level Information Extraction","authors":"Jiang Liu, Bobo Li, Xinran Yang, Na Yang, Hao Fei, Mingyao Zhang, Fei Li, Donghong Ji","doi":"10.1109/tpami.2025.3609288","DOIUrl":"https://doi.org/10.1109/tpami.2025.3609288","url":null,"abstract":"","PeriodicalId":13426,"journal":{"name":"IEEE Transactions on Pattern Analysis and Machine Intelligence","volume":"61 1","pages":""},"PeriodicalIF":23.6,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145035494","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Reinterpreting Hypergraph Kernels: Insights Through Homomorphism Analysis
IF 23.6 | Q1 | Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence Pub Date : 2025-09-11 DOI: 10.1109/tpami.2025.3608902
Yifan Zhang, Shaoyi Du, Yifan Feng, Shihui Ying, Yue Gao
{"title":"Reinterpreting Hypergraph Kernels: Insights Through Homomorphism Analysis","authors":"Yifan Zhang, Shaoyi Du, Yifan Feng, Shihui Ying, Yue Gao","doi":"10.1109/tpami.2025.3608902","DOIUrl":"https://doi.org/10.1109/tpami.2025.3608902","url":null,"abstract":"","PeriodicalId":13426,"journal":{"name":"IEEE Transactions on Pattern Analysis and Machine Intelligence","volume":"20 1","pages":""},"PeriodicalIF":23.6,"publicationDate":"2025-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145035495","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Toward Effective Knowledge Distillation: Navigating Beyond Small-data Pitfall.
IF 23.6 | Q1 | Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence Pub Date : 2025-09-09 DOI: 10.1109/tpami.2025.3607982
Zhiwei Hao, Jianyuan Guo, Kai Han, Han Hu, Chang Xu, Yunhe Wang
Abstract: The spectacular success of training large models on extensive datasets highlights the potential of scaling up for exceptional performance. To deploy these models on edge devices, knowledge distillation (KD) is commonly used to create a compact model from a larger, pretrained teacher model. However, as models and datasets rapidly scale up in practical applications, it is crucial to consider the applicability of existing KD approaches originally designed for limited-capacity architectures and small-scale datasets. In this paper, we revisit current KD methods and identify the presence of a small-data pitfall, where most modifications to vanilla KD prove ineffective on large-scale datasets. To guide the design of consistently effective KD methods across different data scales, we conduct a meticulous evaluation of the knowledge transfer process. Our findings reveal that incorporating more useful information is crucial for achieving consistently effective KD methods, while modifications in loss functions show relatively less significance. In light of this, we present a paradigmatic example that combines vanilla KD with deep supervision, incorporating additional information into the student during distillation. This approach surpasses almost all recent KD methods. We believe our study will offer valuable insights to guide the community in navigating beyond the small-data pitfall and toward consistently effective KD.
Citations: 0
Sentence-level Relation Semantics Learning via Contrastive Sentences.
IF 23.6 | Q1 | Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence Pub Date : 2025-09-09 DOI: 10.1109/tpami.2025.3607794
Bowen Xing, Ivor W Tsang
Abstract: Sentence-level semantics plays a key role in language understanding. There exist subtle relations and dependencies among sentence-level samples that remain to be exploited. For example, in relational triple extraction (RTE), existing models overemphasize extraction modules while ignoring sentence-level semantics and relation information, which causes two problems: (1) the semantics fed to extraction modules are relation-unaware; (2) each sample is trained individually without considering inter-sample dependency. To address these issues, we first propose the model-agnostic multi-relation detection task, which incorporates relation information into text encoding to generate relation-aware semantics. Then we propose model-agnostic multi-relation supervised contrastive learning, which leverages the relation-derived inter-sample dependencies as a supervision signal to learn discriminative semantics by drawing together or pushing apart sentence-level representations according to whether they share the same or similar relations. Besides, we design reverse label frequency weighting and hierarchical label embedding mechanisms to alleviate label imbalance and integrate the relation hierarchy. Our method can be applied to any RTE model, and we conduct extensive experiments on five backbones by augmenting them with our method. Experimental results on four public benchmarks show that our method brings significant and consistent improvements to various backbones, and model analyses further verify its effectiveness.
Citations: 0
Transfer Learning of Stochastic Kriging for Individualized Prediction.
IF 23.6 | Q1 | Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence Pub Date : 2025-09-09 DOI: 10.1109/tpami.2025.3607773
Jinwei Yao, Jianguo Wu, Yongxiang Li, Chao Wang
Abstract: Stochastic Kriging (SK) is a generalized variant of Gaussian process regression, and it is developed for dealing with non-i.i.d. noise in functional responses. Although SK has achieved substantial success in various engineering applications, its intrinsic modeling strategy by focusing on the sample mean limits its flexibility and capability of predicting individual functional samples. Moreover, the performance of SK can be impaired under scarce data scenarios, which are commonly encountered in engineering applications, especially for start-up or just deployed systems. In this paper, we propose a novel transfer learning framework to address the challenges of individualization and data scarcity in traditional SK. The proposed framework features a within-process model to facilitate individualized prediction and a between-process model to leverage information from related processes for resolving the issue of data scarcity. The within- and between-process models are integrated through a tailored convolution process, which quantifies interactions within and between processes using a specially designed covariance matrix and corresponding kernel parameters. Statistical properties are investigated on the parameter estimation of the proposed framework, which provide theoretical guarantees for the performance of transfer learning. The proposed method is compared with benchmark methods through various numerical and real case studies, and the results demonstrate the superiority of the proposed method in dealing with individualized prediction of functional responses, especially when limited data are available in the process of interest.
Citations: 0
Combo: Co-speech holistic 3D human motion generation and efficient customizable adaptation in harmony.
IF 23.6 | Q1 | Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence Pub Date : 2025-09-09 DOI: 10.1109/tpami.2025.3607711
Chao Xu, Mingze Sun, Zhi-Qi Cheng, Fei Wang, Yang Liu, Baigui Sun, Ruqi Huang, Alexander Hauptmann
Abstract: In this paper, we propose a novel framework, Combo, for harmonious co-speech holistic 3D human motion generation and efficient customizable adaptation. In particular, we identify one fundamental challenge as the multiple-input-multiple-output (MIMO) nature of the generative model of interest. More concretely, on the input end, the model typically consumes both speech signals and character guidance (e.g., identity and emotion), which hinders further adaptation to varying guidance; on the output end, holistic human motions mainly consist of facial expressions and body movements, which are inherently correlated but non-trivial to coordinate in current data-driven generation processes. In response to the above challenge, we propose tailored designs for both ends. For the former, we propose to pre-train on data regarding a fixed identity with neutral emotion, and defer the incorporation of customizable conditions (identity and emotion) to the fine-tuning stage, which is boosted by our novel X-Adapter for parameter-efficient fine-tuning. For the latter, we propose a simple yet effective transformer design, DU-Trans, which first divides into two branches to learn individual features of facial expression and body movements, and then unites them to learn a joint bi-directional distribution and directly predict combined coefficients. Evaluated on the BEAT2 and SHOW datasets, Combo is not only highly effective in generating high-quality motions but also efficient in transferring identity and emotion. Project website: https://xc-csc101.github.io/combo/.
Citations: 0
Segmenting the motion components of a video: A long-term unsupervised model.
IF 23.6 | Q1 | Computer Science
IEEE Transactions on Pattern Analysis and Machine Intelligence Pub Date : 2025-09-09 DOI: 10.1109/tpami.2025.3608065
Etienne Meunier, Patrick Bouthemy
Abstract: Human beings have the ability to continuously analyze a video and immediately extract the motion components. We want to adopt this paradigm to provide a coherent and stable motion segmentation over the video sequence. In this perspective, we propose a novel long-term spatio-temporal model operating in a totally unsupervised way. It takes as input the volume of consecutive optical flow (OF) fields, and delivers a volume of segments of coherent motion over the video. More specifically, we have designed a transformer-based network, where we leverage a mathematically well-founded framework, the Evidence Lower Bound (ELBO), to derive the loss function. The loss function combines a flow reconstruction term involving spatio-temporal parametric motion models combining, in a novel way, polynomial (quadratic) motion models for the spatial dimensions and B-splines for the time dimension of the video sequence, and a regularization term enforcing temporal consistency on the segments. We report experiments on four VOS benchmarks, demonstrating competitive quantitative results while performing motion segmentation on a sequence in one go. We also highlight through visual results the key contributions on temporal consistency brought by our method.
Citations: 0