Medical image analysis — Latest Articles

Multi-contrast image super-resolution with deformable attention and neighborhood-based feature aggregation (DANCE): Applications in anatomic and metabolic MRI
IF 10.7 | CAS Tier 1 (Medicine)
Medical image analysis · Pub Date: 2024-09-30 · DOI: 10.1016/j.media.2024.103359
Multi-contrast magnetic resonance imaging (MRI) reflects information about human tissues from different perspectives and has wide clinical applications. By utilizing auxiliary information from reference images (Refs) in an easy-to-obtain modality, multi-contrast MRI super-resolution (SR) methods can synthesize high-resolution (HR) images from their low-resolution (LR) counterparts in a hard-to-obtain modality. In this study, we systematically discuss the potential impact of cross-modal misalignment between LRs and Refs and, based on this discussion, propose a novel deep-learning-based method with Deformable Attention and Neighborhood-based feature aggregation that is Computationally Efficient (DANCE) and insensitive to misalignment. Our method was evaluated on two public MRI datasets, IXI and FastMRI, and an in-house MR metabolic imaging dataset with amide proton transfer weighted (APTW) images. Experimental results show that our method consistently outperforms baselines in various scenarios, with significant superiority observed on the misaligned group of the IXI dataset and in the prospective study on the clinical dataset. A robustness study confirms that our method is insensitive to misalignment, maintaining an average PSNR of 30.67 dB under rotations and translations of the Refs of up to ±9° and ±9 pixels. Given its desirable overall performance, good robustness, and moderate computational complexity, the method holds substantial potential for clinical application.
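The robustness result above is quoted as an average PSNR in dB. As a point of reference (this is the generic definition of the metric, not the authors' evaluation code), PSNR can be computed as:

```python
import numpy as np

def psnr(reference: np.ndarray, estimate: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB, the robustness metric quoted above."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")          # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

print(psnr(np.zeros((4, 4)), np.full((4, 4), 0.1)))  # 20.0
```

A uniform error of 0.1 on a unit data range gives an MSE of 0.01 and hence 10·log10(1/0.01) = 20 dB.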
Citations: 0
A Flow-based Truncated Denoising Diffusion Model for super-resolution Magnetic Resonance Spectroscopic Imaging
IF 10.7 | CAS Tier 1 (Medicine)
Medical image analysis · Pub Date: 2024-09-27 · DOI: 10.1016/j.media.2024.103358
Magnetic Resonance Spectroscopic Imaging (MRSI) is a non-invasive imaging technique for studying metabolism and has become a crucial tool for understanding neurological diseases, cancers, and diabetes. High-spatial-resolution MRSI is needed to characterize lesions, but in practice MRSI is acquired at low resolution due to time and sensitivity restrictions imposed by the low metabolite concentrations. There is therefore an imperative need for a post-processing approach that generates high-resolution MRSI from low-resolution data that can be acquired quickly and with high sensitivity. Deep-learning-based super-resolution methods have provided promising results for improving the spatial resolution of MRSI, but their capability to generate accurate, high-quality images remains limited. Recently, diffusion models have demonstrated superior learning capability compared with other generative models in various tasks, but sampling from a diffusion model requires iterating through a large number of diffusion steps, which is time-consuming. This work introduces a Flow-based Truncated Denoising Diffusion Model (FTDDM) for super-resolution MRSI, which shortens the diffusion process by truncating the diffusion chain; the truncated steps are estimated using a normalizing-flow-based network. The network is conditioned on upscaling factors to enable multi-scale super-resolution. To train and evaluate the deep learning models, we developed a ¹H-MRSI dataset acquired from 25 high-grade glioma patients. We demonstrate that FTDDM outperforms existing generative models while speeding up the sampling process by over 9-fold compared with the baseline diffusion model. Neuroradiologists' evaluations confirmed the clinical advantages of our method, which also supports uncertainty estimation and sharpness adjustment, extending its potential clinical applications.
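The core idea — truncating the reverse diffusion chain and letting a normalizing flow supply the starting sample — can be illustrated with a toy sketch. Everything here is a stand-in: the chain length, truncation point, and noise schedule are assumed values, and the closed-form forward draw replaces the flow network that FTDDM actually trains:

```python
import numpy as np

T = 1000                      # full diffusion chain length (assumed)
t_trunc = 100                 # truncation point; in FTDDM the flow bridges to this step
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))          # stand-in for a metabolite map

# Closed-form forward process q(x_t | x_0). The flow network's job in FTDDM is to
# produce a sample that looks like x_{t_trunc}; here we cheat and draw it directly.
def q_sample(x0, t):
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

x_start = q_sample(x0, t_trunc - 1)       # reverse denoising begins here, not at t = T
print(f"reverse steps: {t_trunc} instead of {T}")
```

Starting the reverse loop at `t_trunc` instead of `T` is where the reported ~9-fold sampling speed-up comes from (exact step counts in the paper differ from these assumed numbers).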
Citations: 0
Label refinement network from synthetic error augmentation for medical image segmentation
IF 10.7 | CAS Tier 1 (Medicine)
Medical image analysis · Pub Date: 2024-09-27 · DOI: 10.1016/j.media.2024.103355
Deep convolutional neural networks for image segmentation do not learn the label structure explicitly and may produce segmentations with an incorrect structure, e.g., disconnected cylindrical segments when segmenting tree-like structures such as airways or blood vessels. In this paper, we propose a novel label refinement method that corrects such errors in an initial segmentation, implicitly incorporating information about label structure. The method features two novel parts: (1) a model that generates synthetic structural errors, and (2) a label appearance simulation network that produces segmentations with synthetic errors similar in appearance to the real initial segmentations. Using these segmentations with synthetic errors together with the original images, the label refinement network is trained to correct errors and improve the initial segmentations. The proposed method is validated on two segmentation tasks: airway segmentation from chest computed tomography (CT) scans and brain vessel segmentation from 3D CT angiography (CTA) images of the brain. In both applications, our method significantly outperformed a standard 3D U-Net, four previous label refinement methods, and a U-Net trained with a loss tailored to tubular structures. Improvements are even larger when additional unlabeled data is used for model training. In an ablation study, we demonstrate the value of the different components of the proposed method.
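A minimal sketch of the first component — generating synthetic structural errors — might corrupt a binary tubular mask by zeroing a slab so that a connected structure becomes disconnected. This toy stands in for the paper's error-generation model, whose exact design is not given in the abstract:

```python
import numpy as np

def add_synthetic_disconnection(mask: np.ndarray, gap: int, rng) -> np.ndarray:
    """Corrupt a binary tubular mask by zeroing a slab of rows,
    mimicking the 'disconnected branch' errors a segmenter can make."""
    corrupted = mask.copy()
    axis_len = mask.shape[0]
    start = rng.integers(0, axis_len - gap)
    corrupted[start:start + gap] = 0      # break the tube across the slab
    return corrupted

rng = np.random.default_rng(0)
tube = np.zeros((32, 32), dtype=np.uint8)
tube[:, 14:18] = 1                        # a vertical "vessel"
broken = add_synthetic_disconnection(tube, gap=4, rng=rng)
print(int(tube.sum() - broken.sum()))     # 16 foreground pixels removed
```

Pairs of (corrupted, clean) masks like these are what lets the refinement network learn to repair real structural errors.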
Citations: 0
MUsculo-Skeleton-Aware (MUSA) deep learning for anatomically guided head-and-neck CT deformable registration
IF 10.7 | CAS Tier 1 (Medicine)
Medical image analysis · Pub Date: 2024-09-21 · DOI: 10.1016/j.media.2024.103351
Deep-learning-based deformable image registration (DL-DIR) has demonstrated improved accuracy compared to time-consuming non-DL methods across various anatomical sites. However, DL-DIR remains challenging in heterogeneous tissue regions with large deformation; several state-of-the-art DL-DIR methods fail to capture large, anatomically plausible deformation when tested on head-and-neck computed tomography (CT) images. These results suggest that such complex head-and-neck deformation may be beyond the capacity of a single network structure or a homogeneous smoothness regularization. To address the challenge of combined multi-scale musculoskeletal motion and soft-tissue deformation in the head-and-neck region, we propose a MUsculo-Skeleton-Aware (MUSA) framework that anatomically guides DL-DIR by leveraging an explicit multiresolution strategy and inhomogeneous deformation constraints between bony structures and soft tissue. The proposed method decomposes the complex deformation into a bulk posture change and a residual fine deformation, and accommodates both inter- and intra-subject registration. Our results show that the MUSA framework consistently improves registration accuracy and, more importantly, the plausibility of the deformation across various network architectures. The code will be publicly available at https://github.com/HengjieLiu/DIR-MUSA.
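The decomposition into a bulk posture change plus a residual fine deformation can be sketched as composing an affine transform with a per-point displacement field. This is an illustrative reading of the idea, not the MUSA implementation; the rotation angle and residual magnitudes below are made up:

```python
import numpy as np

def compose_deformation(points, affine, residual):
    """Total deformation = bulk posture change (affine) + fine residual displacements."""
    homog = np.c_[points, np.ones(len(points))]      # (N, 4) homogeneous coordinates
    bulk = homog @ affine.T                          # rigid/affine posture component
    return bulk[:, :3] + residual                    # add per-point fine deformation

rng = np.random.default_rng(0)
pts = rng.random((5, 3))
angle = np.deg2rad(10.0)                             # small neck flexion as bulk motion
affine = np.eye(4)
affine[:3, :3] = [[1.0, 0.0, 0.0],
                  [0.0, np.cos(angle), -np.sin(angle)],
                  [0.0, np.sin(angle),  np.cos(angle)]]
fine = 0.01 * rng.standard_normal((5, 3))            # residual soft-tissue deformation
warped = compose_deformation(pts, affine, fine)
```

Splitting the motion this way lets the bulk component absorb the large, rigid-like skeletal change while the residual field only needs to model small soft-tissue adjustments.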
Citations: 0
DeepResBat: Deep residual batch harmonization accounting for covariate distribution differences
IF 10.7 | CAS Tier 1 (Medicine)
Medical image analysis · Pub Date: 2024-09-21 · DOI: 10.1016/j.media.2024.103354
Pooling MRI data from multiple datasets requires harmonization to reduce undesired inter-site variability while preserving the effects of biological variables (covariates). The popular harmonization approach ComBat uses a mixed-effects regression framework that explicitly accounts for covariate distribution differences across datasets. There is also significant interest in harmonization approaches based on deep neural networks (DNNs), such as the conditional variational autoencoder (cVAE). However, current DNN approaches do not explicitly account for covariate distribution differences across datasets. Here, we provide mathematical results suggesting that failing to account for covariates can lead to suboptimal harmonization. We propose two DNN-based covariate-aware harmonization approaches: covariate VAE (coVAE) and DeepResBat. coVAE is a natural extension of cVAE that concatenates covariates and site information with site- and covariate-invariant latent representations. DeepResBat adopts a residual framework inspired by ComBat: it first removes the effects of covariates with nonlinear regression trees, then eliminates site differences with cVAE, and finally adds the covariate effects back to the harmonized residuals. Using three datasets from three continents, with a total of 2787 participants and 10,085 anatomical T1 scans, we find that DeepResBat and coVAE outperform ComBat, CovBat, and cVAE in removing dataset differences while enhancing biological effects of interest. However, coVAE hallucinates spurious associations between anatomical MRI and covariates even when no association exists; future studies proposing DNN-based harmonization approaches should be aware of this false-positive pitfall. Overall, our results suggest that DeepResBat is an effective deep learning alternative to ComBat. Code for DeepResBat can be found at https://github.com/ThomasYeoLab/CBIG/tree/master/stable_projects/harmonization/An2024_DeepResBat.
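The residual recipe (remove covariate effects, harmonize the residuals, add covariate effects back) can be sketched in a few lines. Note the substitutions: plain least squares replaces the paper's nonlinear regression trees, and per-site mean-centering stands in for the cVAE:

```python
import numpy as np

def residual_harmonize(features, covariates, site):
    """ComBat-style residual pipeline sketch: regress out covariates, remove
    per-site mean shifts from the residuals, then add covariate effects back.
    (DeepResBat uses regression trees + a cVAE; OLS + centering stand in here.)"""
    X = np.c_[np.ones(len(covariates)), covariates]       # design matrix with intercept
    beta, *_ = np.linalg.lstsq(X, features, rcond=None)
    covariate_effect = X @ beta
    residual = features - covariate_effect
    for s in np.unique(site):                             # crude "site removal"
        idx = site == s
        residual[idx] -= residual[idx].mean(axis=0)
    return residual + covariate_effect                    # restore biology of interest

rng = np.random.default_rng(0)
age = rng.uniform(50, 90, 100)
site = rng.integers(0, 2, 100)
# synthetic cortical-thickness-like feature: age effect + additive site shift + noise
thickness = 3.0 - 0.01 * age + 0.2 * site + 0.05 * rng.standard_normal(100)
harmonized = residual_harmonize(thickness[:, None], age[:, None], site)
```

The key property is that the covariate effect is removed before site correction and restored afterwards, so harmonization cannot silently erase the biological signal.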
Citations: 0
PSFHS challenge report: Pubic symphysis and fetal head segmentation from intrapartum ultrasound images
IF 10.7 | CAS Tier 1 (Medicine)
Medical image analysis · Pub Date: 2024-09-21 · DOI: 10.1016/j.media.2024.103353
Segmentation of fetal and maternal structures, particularly in intrapartum ultrasound imaging as advocated by the International Society of Ultrasound in Obstetrics and Gynecology (ISUOG) for monitoring labor progression, is a crucial first step for quantitative diagnosis and clinical decision-making. This requires specialized analysis by obstetrics professionals, a task that (i) is highly time-consuming and costly and (ii) often yields inconsistent results. The utility of automatic segmentation algorithms for biometry has been proven, though existing results remain suboptimal. To push forward advancements in this area, the Grand Challenge on Pubic Symphysis-Fetal Head Segmentation (PSFHS) was held alongside the 26th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2023). The challenge aimed to advance automatic segmentation algorithms at an international scale, providing the largest dataset to date, with 5,101 intrapartum ultrasound images collected from two ultrasound machines across three hospitals from two institutions. The scientific community's enthusiastic participation led to the selection of the top 8 of 179 entries from 193 registrants in the initial phase to proceed to the competition's second stage. These algorithms have elevated the state of the art in automatic PSFHS from intrapartum ultrasound images. A thorough analysis of the results pinpointed ongoing challenges in the field and outlined recommendations for future work. The top solutions and the complete dataset remain publicly available, fostering further advancements in automatic segmentation and biometry for intrapartum ultrasound imaging.
Citations: 0
Fourier Convolution Block with global receptive field for MRI reconstruction
IF 10.7 | CAS Tier 1 (Medicine)
Medical image analysis · Pub Date: 2024-09-20 · DOI: 10.1016/j.media.2024.103349
Reconstructing images from under-sampled Magnetic Resonance Imaging (MRI) signals significantly reduces scan time and improves clinical practice. However, Convolutional Neural Network (CNN)-based methods, while demonstrating strong performance in MRI reconstruction, may be limited by their restricted receptive field (RF), which hinders the capture of global features. This is particularly crucial for reconstruction, as aliasing artifacts are distributed globally. Recent advancements in Vision Transformers have further emphasized the significance of a large RF. In this study, we propose a novel global Fourier Convolution Block (FCB) with a whole-image RF and low computational complexity, obtained by transforming regular spatial-domain convolutions into the frequency domain. Visualizations of the effective RF and trained kernels demonstrate that FCB enlarges the RF of reconstruction models in practice. The proposed FCB was evaluated on four popular CNN architectures using brain and knee MRI datasets. Models with FCB achieved higher PSNR and SSIM than baseline models and recovered more details and texture. The code is publicly available at https://github.com/Haozhoong/FCB.
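The trick behind FCB rests on the convolution theorem: pointwise multiplication of spectra equals circular convolution in the spatial domain, so a kernel parameterized in frequency space has a receptive field spanning the whole image. A minimal numpy illustration (not the authors' block, which is a trained network layer):

```python
import numpy as np

def fourier_conv2d(image: np.ndarray, kernel_freq: np.ndarray) -> np.ndarray:
    """Convolution as pointwise multiplication in the frequency domain.
    The kernel lives entirely in frequency space, so its effective
    receptive field is the whole image."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * kernel_freq))

# Sanity check: a zero-padded spatial kernel's FFT used this way reproduces
# circular convolution with that kernel.
rng = np.random.default_rng(0)
img = rng.random((16, 16))
spatial_kernel = np.zeros((16, 16))
spatial_kernel[:3, :3] = rng.random((3, 3))          # small 3x3 kernel, zero-padded
out = fourier_conv2d(img, np.fft.fft2(spatial_kernel))
```

A learned `kernel_freq` with no zero-padding constraint is what gives the block its global RF at FFT cost rather than the O(image × kernel) cost of a dense spatial convolution.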
Citations: 0
Re-identification from histopathology images
IF 10.7 | CAS Tier 1 (Medicine)
Medical image analysis · Pub Date: 2024-09-19 · DOI: 10.1016/j.media.2024.103335
In numerous studies, deep learning algorithms have proven their potential for the analysis of histopathology images, for example, for revealing the subtypes of tumors or the primary origin of metastases. These models require large datasets for training, which must be anonymized to prevent possible patient identity leaks. This study demonstrates that even relatively simple deep learning algorithms can re-identify patients in large histopathology datasets with substantial accuracy. In addition, we compared a comprehensive set of state-of-the-art whole-slide image classifiers and feature extractors for the given task. We evaluated our algorithms on two TCIA datasets, lung squamous cell carcinoma (LSCC) and lung adenocarcinoma (LUAD), and also demonstrate the algorithm's performance on an in-house dataset of meningioma tissue. We predicted the source patient of a slide with F1 scores of up to 80.1% and 77.19% on the LSCC and LUAD datasets, respectively, and 77.09% on our meningioma dataset. Based on our findings, we formulated a risk assessment scheme to estimate the risk to patient privacy prior to publication.
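Per-patient attribution is scored with F1. As a reminder of the metric — shown here in its binary one-vs-rest form; the paper's multi-class protocol is described in the full text — a minimal definition:

```python
import numpy as np

def f1_score(y_true: np.ndarray, y_pred: np.ndarray) -> float:
    """F1 = harmonic mean of precision and recall (binary labels in {0, 1})."""
    tp = int(np.sum((y_true == 1) & (y_pred == 1)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f1_score(np.array([1, 1, 0, 0]), np.array([1, 0, 1, 0])))  # 0.5
```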
Citations: 0
UM-Net: Rethinking ICGNet for polyp segmentation with uncertainty modeling
IF 10.7 | CAS Tier 1 (Medicine)
Medical image analysis · Pub Date: 2024-09-19 · DOI: 10.1016/j.media.2024.103347
Automatic segmentation of polyps from colonoscopy images plays a critical role in the early diagnosis and treatment of colorectal cancer. Nevertheless, several bottlenecks remain. In our previous work, we focused mainly on polyps with intra-class inconsistency and low contrast, using ICGNet to address them. Because of differences in equipment and in the specific locations and properties of polyps, the color distribution of the collected images is inconsistent. ICGNet was designed primarily around reverse-contour guide information and local-global context information, ignoring this inconsistent color distribution, which leads to overfitting and makes it difficult to focus only on beneficial image content. In addition, a trustworthy segmentation model should not only produce high-precision results but also provide a measure of uncertainty to accompany its predictions so that physicians can make informed decisions; ICGNet only gives the segmentation result and lacks an uncertainty measure. To address these bottlenecks, we extend the original ICGNet to a comprehensive and effective network (UM-Net) with two main contributions whose practical value is demonstrated experimentally. First, we employ a color transfer operation to weaken the relationship between color and polyps, making the model attend more to the shape of the polyps. Second, we provide uncertainty estimates to represent the reliability of the segmentation results and use variance to rectify the uncertainty. Our improved method is evaluated on five polyp datasets and shows competitive results compared with other advanced methods in both learning ability and generalization capability. The source code is available at https://github.com/dxqllp/UM-Net.
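The color transfer operation is not detailed in the abstract; a common choice, shown here purely as an assumption, is per-channel mean/std matching against a reference frame:

```python
import numpy as np

def color_transfer(source: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Per-channel statistics matching: shift/scale source colors so their mean
    and std match the reference image, weakening the color-polyp correlation."""
    out = source.astype(np.float64).copy()
    for c in range(out.shape[-1]):
        s_mean, s_std = out[..., c].mean(), out[..., c].std()
        r_mean, r_std = reference[..., c].mean(), reference[..., c].std()
        out[..., c] = (out[..., c] - s_mean) / (s_std + 1e-8) * r_std + r_mean
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
src = rng.uniform(0.2, 0.5, (32, 32, 3))     # dark colonoscopy frame stand-in
ref = rng.uniform(0.4, 0.8, (32, 32, 3))
transferred = color_transfer(src, ref)
```

Randomizing the reference during training decouples polyp appearance from acquisition-specific color statistics, leaving shape as the dominant cue.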
Citations: 0
Maxillofacial bone movements-aware dual graph convolution approach for postoperative facial appearance prediction
IF 10.7 | CAS Tier 1 (Medicine)
Medical image analysis · Pub Date: 2024-09-19 · DOI: 10.1016/j.media.2024.103350
Postoperative facial appearance prediction is vital for surgeons to plan orthognathic surgery and communicate with patients. Conventional biomechanical prediction methods require heavy computation and time-consuming manual operations, which hamper their clinical use. Deep-learning-based methods have shown potential to improve computational efficiency while achieving comparable accuracy. However, existing deep-learning-based methods learn facial features only from facial point clouds and process regional points independently, which constrains their ability to perceive facial surface details and topology. In addition, they predict postoperative displacements for all facial points in one step, which is vulnerable to weakly supervised training and prone to distorted predictions. To alleviate these limitations, we propose a novel dual-graph-convolution-based postoperative facial appearance prediction model that accounts for surface geometry by learning on two graphs constructed from the facial mesh in the Euclidean and geodesic spaces, and transfers the bone movements to facial movements in the dual spaces. We further adopt a coarse-to-fine strategy that makes coarse predictions on facial meshes with fewer vertices and then adds more vertices to obtain more robust fine predictions. Experiments on real clinical data demonstrate that our method outperforms state-of-the-art deep-learning-based methods both qualitatively and quantitatively.
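The two graphs can be sketched as (i) a k-nearest-neighbor graph on vertex coordinates (Euclidean space) and (ii) 1-ring neighborhoods from mesh connectivity as a proxy for geodesic closeness. This is an illustrative construction, not the authors' exact graphs:

```python
import numpy as np

def euclidean_knn(vertices: np.ndarray, k: int) -> np.ndarray:
    """k-nearest-neighbor graph in Euclidean space (may connect spatially close
    but surface-distant points, e.g. upper and lower lip)."""
    d = np.linalg.norm(vertices[:, None] - vertices[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-loops
    return np.argsort(d, axis=1)[:, :k]

def geodesic_neighbors(faces: np.ndarray, n_vertices: int):
    """1-ring neighborhoods from mesh connectivity, a proxy for geodesic closeness."""
    nbrs = [set() for _ in range(n_vertices)]
    for a, b, c in faces:
        nbrs[a] |= {b, c}; nbrs[b] |= {a, c}; nbrs[c] |= {a, b}
    return [sorted(int(v) for v in s) for s in nbrs]

verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0]], dtype=float)
faces = np.array([[0, 1, 2], [1, 3, 2]])      # two triangles sharing edge 1-2
print(geodesic_neighbors(faces, 4)[0])        # [1, 2]
```

Convolving over both graphs lets the model see points that are close in space and points that are close along the facial surface, which can disagree near folds and lips.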
Citations: 0