IEEE Transactions on Medical Imaging: Latest Articles

Corrections to “Multi-Label Generalized Zero Shot Chest X-Ray Classification By Combining Image-Text Information With Feature Disentanglement”
IEEE Transactions on Medical Imaging. Pub Date: 2025-04-03. DOI: 10.1109/TMI.2025.3549666
Dwarikanath Mahapatra;Antonio Jimeno Yepes;Behzad Bozorgtabar;Sudipta Roy;Zongyuan Ge;Mauricio Reyes
{"title":"Corrections to “Multi-Label Generalized Zero Shot Chest X-Ray Classification By Combining Image-Text Information With Feature Disentanglement”","authors":"Dwarikanath Mahapatra;Antonio Jimeno Yepes;Behzad Bozorgtabar;Sudipta Roy;Zongyuan Ge;Mauricio Reyes","doi":"10.1109/TMI.2025.3549666","DOIUrl":"10.1109/TMI.2025.3549666","url":null,"abstract":"Presents corrections to the paper, (Corrections to “Multi-Label Generalized Zero Shot Chest X-Ray Classification By Combining Image-Text Information With Feature Disentanglement”).","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1984-1985"},"PeriodicalIF":0.0,"publicationDate":"2025-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10948537","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143775404","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Mitigating Data Consistency Induced Discrepancy in Cascaded Diffusion Models for Sparse-view CT Reconstruction
IEEE Transactions on Medical Imaging. Pub Date: 2025-04-02. DOI: 10.1109/TMI.2025.3557243
Hanyu Chen, Zhixiu Hao, Lin Guo, Liying Xiao
{"title":"Mitigating Data Consistency Induced Discrepancy in Cascaded Diffusion Models for Sparse-view CT Reconstruction.","authors":"Hanyu Chen, Zhixiu Hao, Lin Guo, Liying Xiao","doi":"10.1109/TMI.2025.3557243","DOIUrl":"https://doi.org/10.1109/TMI.2025.3557243","url":null,"abstract":"<p><p>Sparse-view Computed Tomography (CT) image reconstruction is a promising approach to reduce radiation exposure, but it inevitably leads to image degradation. Although diffusion model-based approaches are computationally expensive and suffer from the training-sampling discrepancy, they provide a potential solution to the problem. This study introduces a novel Cascaded Diffusion with Discrepancy Mitigation (CDDM) framework, including the low-quality image generation in latent space and the high-quality image generation in pixel space which contains data consistency and discrepancy mitigation in a one-step reconstruction process. The cascaded framework minimizes computational costs by replacing some inference steps from pixel to latent space. The discrepancy mitigation technique addresses the training-sampling gap induced by data consistency, ensuring the data distribution is close to the original diffusion manifold. A specialized Alternating Direction Method of Multipliers (ADMM) is employed to process image gradients in separate directions, offering a more targeted approach to regularization. Experimental results across several datasets demonstrate CDDM's superior performance in high-quality image generation with clearer boundaries compared to existing methods, highlighting the framework's computational efficiency.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143775216","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Spatio-Temporal and Retrieval-Augmented Modelling for Chest X-Ray Report Generation
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-25. DOI: 10.1109/TMI.2025.3554498
Yan Yang, Xiaoxing You, Ke Zhang, Zhenqi Fu, Xianyun Wang, Jiajun Ding, Jiamei Sun, Zhou Yu, Qingming Huang, Weidong Han, Jun Yu
{"title":"Spatio-Temporal and Retrieval-Augmented Modelling for Chest X-Ray Report Generation.","authors":"Yan Yang, Xiaoxing You, Ke Zhang, Zhenqi Fu, Xianyun Wang, Jiajun Ding, Jiamei Sun, Zhou Yu, Qingming Huang, Weidong Han, Jun Yu","doi":"10.1109/TMI.2025.3554498","DOIUrl":"https://doi.org/10.1109/TMI.2025.3554498","url":null,"abstract":"<p><p>Chest X-ray report generation has attracted increasing research attention. However, most existing methods neglect the temporal information and typically generate reports conditioned on a fixed number of images. In this paper, we propose STREAM: Spatio-Temporal and REtrieval-Augmented Modelling for automatic chest X-ray report generation. It mimics clinical diagnosis by integrating current and historical studies to interpret the present condition (temporal), with each study containing images from multi-views (spatial). Concretely, our STREAM is built upon an encoder-decoder architecture, utilizing a large language model (LLM) as the decoder. Overall, spatio-temporal visual dynamics are packed as visual prompts and regional semantic entities are retrieved as textual prompts. First, a token packer is proposed to capture condensed spatio-temporal visual dynamics, enabling the flexible fusion of images from current and historical studies. Second, to augment the generation with existing knowledge and regional details, a progressive semantic retriever is proposed to retrieve semantic entities from a preconstructed knowledge bank as heuristic text prompts. The knowledge bank is constructed to encapsulate anatomical chest X-ray knowledge into structured entities, each linked to a specific chest region. Extensive experiments on public datasets have shown the state-of-the-art performance of our method. Related codes and the knowledge bank are available at https://github.com/yangyan22/STREAM.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143712667","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Replace2Self: Self-Supervised Denoising based on Voxel Replacing and Image Mixing for Diffusion MRI
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-18. DOI: 10.1109/TMI.2025.3552611
Linhai Wu, Lihui Wang, Zeyu Deng, Yuemin Zhu, Hongjiang Wei
{"title":"Replace2Self: Self-Supervised Denoising based on Voxel Replacing and Image Mixing for Diffusion MRI.","authors":"Linhai Wu, Lihui Wang, Zeyu Deng, Yuemin Zhu, Hongjiang Wei","doi":"10.1109/TMI.2025.3552611","DOIUrl":"10.1109/TMI.2025.3552611","url":null,"abstract":"<p><p>Low signal to noise ratio (SNR) remains one of the limitations of diffusion weighted (DW) imaging. How to suppress the influence of noise on the subsequent analysis about the tissue microstructure is still challenging. This work proposed a novel self-supervised learning model, Replace2Self, to effectively reduce spatial correlated noise in DW images. Specifically, a voxel replacement strategy based on similar block matching in Q-space was proposed to destroy the correlations of noise in DW image along one diffusion gradient direction. To alleviate the signal gap caused by the voxel replacement, an image mixing strategy based on complementary mask was designed to generate two different noisy DW images. After that, these two noisy DW images were taken as input, and the non-correlated noisy DWimage after voxel replacement was taken as learning target, a denoising network was trained for denoising. To promote the denoising performance, a complementary mask mixing consistency loss and an inverse replacement regularization loss were also proposed. Through the comparisons against several existing DW image denoising methods on extensive simulation data with different noise distributions, noise levels and b-values, as well as the acquisition datasets and the ablation experiments, we verified the effectiveness of the proposed method. Regardless of the noise distribution and noise level, the proposed method achieved the highest PSNR, which was at least 1.9% higher than the suboptimal method when the noise level reaches 10%. Furthermore, our method has superior generalization ability due to the use of the proposed strategies.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143660090","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Blood Oxygenation Quantification in Multispectral Photoacoustic Tomography Using A Convex Cone Approach
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-17. DOI: 10.1109/TMI.2025.3551744
Chuhua Wu, Hongzhi Zuo, Manxiu Cui, Handi Deng, Yuwen Chen, Xuanhao Wang, Bangyan Wang, Cheng Ma
{"title":"Blood Oxygenation Quantification in Multispectral Photoacoustic Tomography Using A Convex Cone Approach.","authors":"Chuhua Wu, Hongzhi Zuo, Manxiu Cui, Handi Deng, Yuwen Chen, Xuanhao Wang, Bangyan Wang, Cheng Ma","doi":"10.1109/TMI.2025.3551744","DOIUrl":"10.1109/TMI.2025.3551744","url":null,"abstract":"<p><p>Multispectral photoacoustic tomography (PAT) can create high spatial and temporal resolution images of oxygen saturation (sO<sub>2</sub>) distribution in deep tissue. However, unknown distributions of photon absorption and scattering introduces complex modulations to the photoacoustic (PA) spectra, dramatically reducing the accuracy of SO<sub>2</sub> quantification. In this study, a rigorous light transport model was employed to unveil that the PA spectra corresponding to distinct SO<sub>2</sub> values can be constrained within separate convex cones (CCs). Based on the CC model, SO<sub>2</sub> estimation is achieved by identifying the CC nearest to the measured data through a modified Gilbert-Johnson-Keerthi (GJK) algorithm. The CC method combines a rigorous physical model with data-driven approach, and shows outstanding robustness in numerical, phantom, and in vivo imaging experiments validated against ground truth measurements. The average SO<sub>2</sub> estimation error is approximately only 3% in in vivo human experiments, underscoring its potential for clinical application. All of our computer codes and data are publicly available on GitHub.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143652826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
DenseFormer-MoE: A Dense Transformer Foundation Model with Mixture of Experts for Multi-Task Brain Image Analysis
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-14. DOI: 10.1109/TMI.2025.3551514
Rizhi Ding, Hui Lu, Manhua Liu
{"title":"DenseFormer-MoE: A Dense Transformer Foundation Model with Mixture of Experts for Multi-Task Brain Image Analysis.","authors":"Rizhi Ding, Hui Lu, Manhua Liu","doi":"10.1109/TMI.2025.3551514","DOIUrl":"10.1109/TMI.2025.3551514","url":null,"abstract":"<p><p>Deep learning models have been widely investigated for computing and analyzing brain images across various downstream tasks such as disease diagnosis and age regression. Most existing models are tailored for specific tasks and diseases, posing a challenge in developing a foundation model for diverse tasks. This paper proposes a Dense Transformer Foundation Model with Mixture of Experts (DenseFormer-MoE), which integrates dense convolutional network, Vision Transformer and Mixture of Experts (MoE) to progressively learn and consolidate local and global features from T1-weighted magnetic resonance images (sMRI) for multiple tasks including diagnosing multiple brain diseases and predicting brain age. First, a foundation model is built by combining the vision Transformer with Densenet, which are pre-trained with Masked Autoencoder and self-supervised learning to enhance the generalization of feature representations. Then, to mitigate optimization conflicts in multi-task learning, MoE is designed to dynamically select the most appropriate experts for each task. Finally, our method is evaluated on multiple renowned brain imaging datasets including UK Biobank (UKB), Alzheimer's Disease Neuroimaging Initiative (ADNI), and Parkinson's Progression Markers Initiative (PPMI). Experimental results and comparison demonstrate that our method achieves promising performances for prediction of brain age and diagnosis of brain diseases.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143631136","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Speckle Denoising of Dynamic Contrast-enhanced Ultrasound using Low-rank Tensor Decomposition
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-14. DOI: 10.1109/TMI.2025.3551660
Metin Calis, Massimo Mischi, Alle-Jan van der Veen, Borbala Hunyadi
{"title":"Speckle Denoising of Dynamic Contrast-enhanced Ultrasound using Low-rank Tensor Decomposition.","authors":"Metin Calis, Massimo Mischi, Alle-Jan van der Veen, Borbala Hunyadi","doi":"10.1109/TMI.2025.3551660","DOIUrl":"10.1109/TMI.2025.3551660","url":null,"abstract":"<p><p>Dynamic contrast-enhanced ultrasound (DCEUS) is an imaging modality for assessing microvascular perfusion and dispersion kinetics. However, the presence of speckle noise may hamper the quantitative analysis of the contrast kinetics. Common speckle denoising techniques based on low-rank approximations typically model the speckle noise as white Gaussian noise (WGN) after the log transformation and apply matrix-based algorithms. We address the high dimensionality of the 4D DCEUS data and apply low-rank tensor decomposition techniques to denoise speckles. Although there are many tensor decompositions that can describe low rankness, we limit our research to multilinear rank and tubal rank. We introduce a gradient-based extension of the multilinear singular value decomposition to model low multilinear rankness, assuming that the log-transformed speckle noise follows a Fisher-tippet distribution. In addition, we apply an algorithm based on tensor singular value decomposition to model low tubal rankness, assuming that the log-transformed speckle noise is WGN with sparse outliers. The effectiveness of the methods is evaluated through simulations and phantom studies. Additionally, the tensor-based algorithms' real-world performance is assessed using DCEUS prostate recordings. Comparative analyses with existing DCEUS denoising literature are conducted, and the algorithms' capabilities are showcased in the context of prostate cancer classification. The addition of Fisher-tippet distribution did not improve the results of tr-MLSVD in the in vivo case. However, most cancer markers are better distinguishable when using a tensor denoising technique than state-of-the-art approaches.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143631137","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
WeakMedSAM: Weakly-Supervised Medical Image Segmentation via SAM with Sub-Class Exploration and Prompt Affinity Mining
IEEE Transactions on Medical Imaging. Pub Date: 2025-03-10. DOI: 10.1109/TMI.2025.3549433
Haoran Wang, Lian Huai, Wenbin Li, Lei Qi, Xingqun Jiang, Yinghuan Shi
{"title":"WeakMedSAM: Weakly-Supervised Medical Image Segmentation via SAM with Sub-Class Exploration and Prompt Affinity Mining.","authors":"Haoran Wang, Lian Huai, Wenbin Li, Lei Qi, Xingqun Jiang, Yinghuan Shi","doi":"10.1109/TMI.2025.3549433","DOIUrl":"https://doi.org/10.1109/TMI.2025.3549433","url":null,"abstract":"<p><p>We have witnessed remarkable progress in foundation models in vision tasks. Currently, several recent works have utilized the segmenting anything model (SAM) to boost the segmentation performance in medical images, where most of them focus on training an adaptor for fine-tuning a large amount of pixel-wise annotated medical images following a fully supervised manner. In this paper, to reduce the labeling cost, we investigate a novel weakly-supervised SAM-based segmentation model, namely WeakMedSAM. Specifically, our proposed WeakMedSAM contains two modules: 1) to mitigate severe co-occurrence in medical images, a sub-class exploration module is introduced to learn accurate feature representations. 2) to improve the quality of the class activation maps, our prompt affinity mining module utilizes the prompt capability of SAM to obtain an affinity map for random-walk refinement. Our method can be applied to any SAM-like backbone, and we conduct experiments with SAMUS and EfficientSAM. The experimental results on three popularlyused benchmark datasets, i.e., BraTS 2019, AbdomenCT-1K, and MSD Cardiac dataset, show the promising results of our proposed WeakMedSAM. Our code is available at https://github.com/wanghr64/WeakMedSAM.</p>","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"PP ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Editorial Flagship Toward the Future
IEEE Transactions on Medical Imaging. Pub Date: 2025-02-17. DOI: 10.1109/TMI.2025.3543049
Ge Wang
{"title":"Editorial Flagship Toward the Future","authors":"Ge Wang","doi":"10.1109/TMI.2025.3543049","DOIUrl":"10.1109/TMI.2025.3543049","url":null,"abstract":"This editorial presents the vision and strategic direction of IEEE Transactions on Medical Imaging (TMI) under new leadership. Key points include restructuring the editorial board to enhance efficiency and diversity, streamlining the peer review process to improve decision quality and speed, and launching the <italic>AI for TMI</i> (AI4TMI) initiative to integrate AI in journal management. Through these efforts, TMI aims to sustain excellence, adapt to emerging trends, and shape the future of medical imaging research.","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 3","pages":"1113-1114"},"PeriodicalIF":0.0,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10891575","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143443166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Asymmetric Adaptive Heterogeneous Network for Multi-Modality Medical Image Segmentation
IEEE Transactions on Medical Imaging. Pub Date: 2025-01-20. DOI: 10.1109/TMI.2025.3526604
Shenhai Zheng;Xin Ye;Chaohui Yang;Lei Yu;Weisheng Li;Xinbo Gao;Yue Zhao
{"title":"Asymmetric Adaptive Heterogeneous Network for Multi-Modality Medical Image Segmentation","authors":"Shenhai Zheng;Xin Ye;Chaohui Yang;Lei Yu;Weisheng Li;Xinbo Gao;Yue Zhao","doi":"10.1109/TMI.2025.3526604","DOIUrl":"10.1109/TMI.2025.3526604","url":null,"abstract":"Existing studies of multi-modality medical image segmentation tend to aggregate all modalities without discrimination and employ multiple symmetric encoders or decoders for feature extraction and fusion. They often overlook the different contributions to visual representation and intelligent decisions among multi-modality images. Motivated by this discovery, this paper proposes an asymmetric adaptive heterogeneous network for multi-modality image feature extraction with modality discrimination and adaptive fusion. For feature extraction, it uses a heterogeneous two-stream asymmetric feature-bridging network to extract complementary features from auxiliary multi-modality and leading single-modality images, respectively. For feature adaptive fusion, the proposed Transformer-CNN Feature Alignment and Fusion (T-CFAF) module enhances the leading single-modality information, and the Cross-Modality Heterogeneous Graph Fusion (CMHGF) module further fuses multi-modality features at a high-level semantic layer adaptively. Comparative evaluation with ten segmentation models on six datasets demonstrates significant efficiency gains as well as highly competitive segmentation accuracy. (Our code is publicly available at <uri>https://github.com/joker-527/AAHN</uri>).","PeriodicalId":94033,"journal":{"name":"IEEE transactions on medical imaging","volume":"44 4","pages":"1836-1852"},"PeriodicalIF":0.0,"publicationDate":"2025-01-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142991455","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0