Journal of Imaging: Latest Articles

Direct Distillation: A Novel Approach for Efficient Diffusion Model Inference.
IF 2.7
Journal of Imaging Pub Date : 2025-02-19 DOI: 10.3390/jimaging11020066
Zilai Li, Rongkai Zhang
{"title":"Direct Distillation: A Novel Approach for Efficient Diffusion Model Inference.","authors":"Zilai Li, Rongkai Zhang","doi":"10.3390/jimaging11020066","DOIUrl":"10.3390/jimaging11020066","url":null,"abstract":"<p><p>Diffusion models are among the most common techniques used for image generation, having achieved state-of-the-art performance by implementing auto-regressive algorithms. However, multi-step inference processes are typically slow and require extensive computational resources. To address this issue, we propose the use of an information bottleneck to reschedule inference using a new sampling strategy, which employs a lightweight distilled neural network to map intermediate stages to the final output. This approach reduces the number of iterations and FLOPS required for inference while ensuring the diversity of generated images. A series of validation experiments were conducted involving the COCO dataset as well as the LAION dataset and two proposed distillation models, requiring 57.5 million and 13.5 million parameters, respectively. Results showed that these models were able to bypass 40-50% of the inference steps originally required by a stable U-Net diffusion model, which included 859 million parameters. In the original sampling process, each inference step required 67,749 million multiply-accumulate operations (MACs), while our two distillate models only required 3954 million MACs and 3922 million MACs per inference step. In addition, our distillation algorithm produced a Fréchet inception distance (FID) of 16.75 in eight steps, which was remarkably lower than those of the progressive distillation, adversarial distillation, and DDIM solver algorithms, which produced FID values of 21.0, 30.0, 22.3, and 24.0, respectively. Notably, this process did not require parameters from the original diffusion model to establish a new distillation model prior to training. 
Information theory was used to further analyze primary bottlenecks in the FID results of existing distillation algorithms, demonstrating that both GANs and typical distillation failed to achieve generative diversity while implicitly studying incorrect posterior probability distributions. Meanwhile, we use information theory to analyze the latest distillation models including LCM-SDXL, SDXL-Turbo, SDXL-Lightning, DMD, and MSD, which reveals the basic reason for the diversity problem confronted by them, and compare those distillation models with our algorithm in the FID and CLIP Score.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856141/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493942","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
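A back-of-the-envelope sketch of the compute savings, using only the per-step MAC counts reported in the abstract. The 16-step baseline schedule and the 50/50 split between base-model and distilled-model steps are hypothetical illustration values, not the paper's actual sampler.

```python
# Per-step MAC counts taken from the abstract; the schedule below is illustrative.
BASE_MACS_PER_STEP = 67_749e6       # original U-Net diffusion model
DISTILLED_MACS_PER_STEP = 3_954e6   # larger of the two distilled models

def total_macs(n_base_steps, n_distilled_steps=0):
    """Total multiply-accumulate operations for a mixed sampling schedule."""
    return (n_base_steps * BASE_MACS_PER_STEP
            + n_distilled_steps * DISTILLED_MACS_PER_STEP)

baseline = total_macs(16)                      # all 16 steps on the base model
bypassed = total_macs(8, n_distilled_steps=8)  # half the steps handed to the student

savings = 1 - bypassed / baseline
print(f"baseline: {baseline:.3e} MACs, bypassed: {bypassed:.3e} MACs")
print(f"compute saved: {savings:.1%}")
```

Under these assumed step counts, handing half the schedule to the distilled network saves roughly 47% of the total MACs, consistent with the 40-50% step-bypass range the abstract reports.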
An Efficient Forest Smoke Detection Approach Using Convolutional Neural Networks and Attention Mechanisms.
IF 2.7
Journal of Imaging Pub Date : 2025-02-19 DOI: 10.3390/jimaging11020067
Quy-Quyen Hoang, Quy-Lam Hoang, Hoon Oh
{"title":"An Efficient Forest Smoke Detection Approach Using Convolutional Neural Networks and Attention Mechanisms.","authors":"Quy-Quyen Hoang, Quy-Lam Hoang, Hoon Oh","doi":"10.3390/jimaging11020067","DOIUrl":"10.3390/jimaging11020067","url":null,"abstract":"<p><p>This study explores a method of detecting smoke plumes effectively as the early sign of a forest fire. Convolutional neural networks (CNNs) have been widely used for forest fire detection; however, they have not been customized or optimized for smoke characteristics. This paper proposes a CNN-based forest smoke detection model featuring novel backbone architecture that can increase detection accuracy and reduce computational load. Since the proposed backbone detects the plume of smoke through different views using kernels of varying sizes, it can better detect smoke plumes of different sizes. By decomposing the traditional square kernel convolution into a depth-wise convolution of the coordinate kernel, it can not only better extract the features of the smoke plume spreading along the vertical dimension but also reduce the computational load. An attention mechanism was applied to allow the model to focus on important information while suppressing less relevant information. 
The experimental results show that our model outperforms other popular ones by achieving detection accuracy of up to 52.9 average precision (AP) and significantly reduces the number of parameters and giga floating-point operations (GFLOPs) compared to the popular models.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856251/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
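A minimal sketch of the kernel decomposition the abstract describes: a k x k square convolution replaced by a vertical (k x 1) pass followed by a horizontal (1 x k) pass, applied per channel (depth-wise). The exact layer layout of the paper's backbone is not specified here; this only illustrates the idea and the parameter saving (k*k vs 2k weights per channel).

```python
import numpy as np

def depthwise_strip_conv(x, kv, kh):
    """Apply a k x 1 then a 1 x k depth-wise convolution to each channel.

    x:  (C, H, W) feature map
    kv: (C, k) vertical strip kernels
    kh: (C, k) horizontal strip kernels
    Uses same-padding so the output has the shape of the input.
    """
    C, H, W = x.shape
    k = kv.shape[1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    out = np.zeros_like(x)
    for c in range(C):
        # Vertical (k x 1) pass over the padded channel.
        v = np.zeros((H, W + 2 * pad))
        for i in range(k):
            v += kv[c, i] * xp[c, i:i + H, :]
        # Horizontal (1 x k) pass over the intermediate result.
        for j in range(k):
            out[c] += kh[c, j] * v[:, j:j + W]
    return out

# Weights per channel: a 5x5 square kernel needs 25, the decomposition needs 10.
k = 5
print(k * k, "vs", 2 * k)
```

The vertical pass is what gives the backbone its sensitivity to smoke spreading along the vertical dimension, while the per-channel strip kernels keep both parameters and MACs low.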
Impact of Data Capture Methods on 3D Reconstruction with Gaussian Splatting.
IF 2.7
Journal of Imaging Pub Date : 2025-02-18 DOI: 10.3390/jimaging11020065
Dimitar Rangelov, Sierd Waanders, Kars Waanders, Maurice van Keulen, Radoslav Miltchev
{"title":"Impact of Data Capture Methods on 3D Reconstruction with Gaussian Splatting.","authors":"Dimitar Rangelov, Sierd Waanders, Kars Waanders, Maurice van Keulen, Radoslav Miltchev","doi":"10.3390/jimaging11020065","DOIUrl":"10.3390/jimaging11020065","url":null,"abstract":"<p><p>This study examines how different filming techniques can enhance the quality of 3D reconstructions with a particular focus on their use in indoor crime scene investigations. Using Neural Radiance Fields (NeRF) and Gaussian Splatting, we explored how factors like camera orientation, filming speed, data layering, and scanning path affect the detail and clarity of 3D reconstructions. Through experiments in a mock crime scene apartment, we identified optimal filming methods that reduce noise and artifacts, delivering clearer and more accurate reconstructions. Filming in landscape mode, at a slower speed, with at least three layers and focused on key objects produced the most effective results. These insights provide valuable guidelines for professionals in forensics, architecture, and cultural heritage preservation, helping them capture realistic high-quality 3D representations. This study also highlights the potential for future research to expand on these findings by exploring other algorithms, camera parameters, and real-time adjustment techniques.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11855968/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493886","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Non-Hospitalized Long COVID Patients Exhibit Reduced Retinal Capillary Perfusion: A Prospective Cohort Study.
IF 2.7
Journal of Imaging Pub Date : 2025-02-17 DOI: 10.3390/jimaging11020062
Clayton E Lyons, Jonathan Alhalel, Anna Busza, Emily Suen, Nathan Gill, Nicole Decker, Stephen Suchy, Zachary Orban, Millenia Jimenez, Gina Perez Giraldo, Igor J Koralnik, Manjot K Gill
{"title":"Non-Hospitalized Long COVID Patients Exhibit Reduced Retinal Capillary Perfusion: A Prospective Cohort Study.","authors":"Clayton E Lyons, Jonathan Alhalel, Anna Busza, Emily Suen, Nathan Gill, Nicole Decker, Stephen Suchy, Zachary Orban, Millenia Jimenez, Gina Perez Giraldo, Igor J Koralnik, Manjot K Gill","doi":"10.3390/jimaging11020062","DOIUrl":"10.3390/jimaging11020062","url":null,"abstract":"<p><p>The mechanism of post-acute sequelae of SARS-CoV-2 (PASC) is unknown. Using optical coherence tomography angiography (OCT-A), we compared retinal foveal avascular zone (FAZ), vessel density (VD), and vessel length density (VLD) in non-hospitalized Neuro-PASC patients with those in healthy controls in an effort to elucidate the mechanism underlying this debilitating condition. Neuro-PASC patients with a positive SARS-CoV-2 test and neurological symptoms lasting ≥6 weeks were included. Those with prior COVID-19 hospitalization were excluded. Subjects underwent OCT-A with segmentation of the full retinal slab into the superficial (SCP) and deep (DCP) capillary plexus. The FAZ was manually delineated on the full slab in ImageJ. An ImageJ macro was used to measure VD and VLD. OCT-A variables were analyzed using linear mixed-effects models with fixed effects for Neuro-PASC, age, and sex, and a random effect for patient to account for measurements from both eyes. The coefficient of Neuro-PASC status was used to determine statistical significance; <i>p</i>-values were adjusted using the Benjamani-Hochberg procedure. Neuro-PASC patients (<i>N</i> = 30; 60 eyes) exhibited a statistically significant (<i>p</i> = 0.005) reduction in DCP VLD compared to healthy controls (<i>N</i> = 44; 80 eyes). 
The sole reduction in DCP VLD in Neuro-PASC may suggest preferential involvement of the smallest blood vessels.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856302/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
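A generic sketch of the Benjamini-Hochberg step-up procedure named in the abstract, implemented in plain Python; the study's actual p-values are not reproduced here.

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted p-values (FDR step-up procedure).

    Each raw p-value p at ascending rank r (of m tests) is scaled to p * m / r,
    then a running minimum from the largest rank down enforces monotonicity.
    """
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    for rank in range(m, 0, -1):          # walk from largest p-value down
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

print(benjamini_hochberg([0.01, 0.04, 0.03, 0.20]))
```

For example, with raw p-values [0.01, 0.04, 0.03, 0.20] the smallest becomes 0.01 * 4 / 1 = 0.04, and the two middle values share the same adjusted value because of the monotonicity constraint.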
Vision-Based Collision Warning Systems with Deep Learning: A Systematic Review.
IF 2.7
Journal of Imaging Pub Date : 2025-02-17 DOI: 10.3390/jimaging11020064
Charith Chitraranjan, Vipooshan Vipulananthan, Thuvarakan Sritharan
{"title":"Vision-Based Collision Warning Systems with Deep Learning: A Systematic Review.","authors":"Charith Chitraranjan, Vipooshan Vipulananthan, Thuvarakan Sritharan","doi":"10.3390/jimaging11020064","DOIUrl":"10.3390/jimaging11020064","url":null,"abstract":"<p><p>Timely prediction of collisions enables advanced driver assistance systems to issue warnings and initiate emergency maneuvers as needed to avoid collisions. With recent developments in computer vision and deep learning, collision warning systems that use vision as the only sensory input have emerged. They are less expensive than those that use multiple sensors, but their effectiveness must be thoroughly assessed. We systematically searched academic literature for studies proposing ego-centric, vision-based collision warning systems that use deep learning techniques. Thirty-one studies among the search results satisfied our inclusion criteria. Risk of bias was assessed with PROBAST. We reviewed the selected studies and answer three primary questions: What are the (1) deep learning techniques used and how are they used? (2) datasets and experiments used to evaluate? (3) results achieved? We identified two main categories of methods: Those that use deep learning models to directly predict the probability of a future collision from input video, and those that use deep learning models at one or more stages of a pipeline to compute a threat metric before predicting collisions. More importantly, we show that the experimental evaluation of most systems is inadequate due to either not performing quantitative experiments or various biases present in the datasets used. 
Lack of suitable datasets is a major challenge to the evaluation of these systems and we suggest future work to address this issue.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856197/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494071","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Unraveling the Role of PET in Cervical Cancer: Review of Current Applications and Future Horizons.
IF 2.7
Journal of Imaging Pub Date : 2025-02-17 DOI: 10.3390/jimaging11020063
Divya Yadav, Elisabeth O'Dwyer, Matthew Agee, Silvina P Dutruel, Sonia Mahajan, Sandra Huicochea Castellanos
{"title":"Unraveling the Role of PET in Cervical Cancer: Review of Current Applications and Future Horizons.","authors":"Divya Yadav, Elisabeth O'Dwyer, Matthew Agee, Silvina P Dutruel, Sonia Mahajan, Sandra Huicochea Castellanos","doi":"10.3390/jimaging11020063","DOIUrl":"10.3390/jimaging11020063","url":null,"abstract":"<p><p>FDG PET/CT provides complementary metabolic information with greater sensitivity and specificity than conventional imaging modalities for evaluating local recurrence, nodal, and distant metastases in patients with cervical cancer. PET/CT can also be used in radiation treatment planning, which is the mainstay of treatment. With the implementation of various oncological guidelines, FDG PET/CT has been utilized more frequently in patient management and prognostication. Newer PET tracers targeting the tumor microenvironment offer valuable biologic insights to elucidate the mechanism of treatment resistance and tumor aggressiveness and identify the high-risk patients. Artificial intelligence and machine learning approaches have been utilized more recently in metastatic disease detection, response assessment, and prognostication of cervical cancer.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856187/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143494029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Accurate Prostate Segmentation in Large-Scale Magnetic Resonance Imaging Datasets via First-in-First-Out Feature Memory and Multi-Scale Context Modeling.
IF 2.7
Journal of Imaging Pub Date : 2025-02-16 DOI: 10.3390/jimaging11020061
Jingyi Zhu, Xukun Zhang, Xiao Luo, Zhiji Zheng, Kun Zhou, Yanlan Kang, Haiqing Li, Daoying Geng
{"title":"Accurate Prostate Segmentation in Large-Scale Magnetic Resonance Imaging Datasets via First-in-First-Out Feature Memory and Multi-Scale Context Modeling.","authors":"Jingyi Zhu, Xukun Zhang, Xiao Luo, Zhiji Zheng, Kun Zhou, Yanlan Kang, Haiqing Li, Daoying Geng","doi":"10.3390/jimaging11020061","DOIUrl":"10.3390/jimaging11020061","url":null,"abstract":"<p><p>Prostate cancer, a prevalent malignancy affecting males globally, underscores the critical need for precise prostate segmentation in diagnostic imaging. However, accurate delineation via MRI still faces several challenges: (1) The distinction of the prostate from surrounding soft tissues is impeded by subtle boundaries in MRI images. (2) Regions such as the apex and base of the prostate exhibit inherent blurriness, which complicates edge extraction and precise segmentation. The objective of this study was to precisely delineate the borders of the prostate including the apex and base regions. This study introduces a multi-scale context modeling module to enhance boundary pixel representation, thus reducing the impact of irrelevant features on segmentation outcomes. Utilizing a first-in-first-out dynamic adjustment mechanism, the proposed methodology optimizes feature vector selection, thereby enhancing segmentation outcomes for challenging apex and base regions of the prostate. Segmentation of the prostate on 2175 clinically annotated MRI datasets demonstrated that our proposed MCM-UNet outperforms existing methods. The Average Symmetric Surface Distance (ASSD) and Dice similarity coefficient (DSC) for prostate segmentation were 0.58 voxels and 91.71%, respectively. The prostate segmentation results closely matched those manually delineated by experienced radiologists. 
Consequently, our method significantly enhances the accuracy of prostate segmentation and holds substantial significance in the diagnosis and treatment of prostate cancer.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856738/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493848","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
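The Dice similarity coefficient reported above (91.71%) has a standard definition that is easy to state in code. This is the generic metric, not code from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks.

    DSC = 2 * |pred AND target| / (|pred| + |target|); 1.0 means perfect
    overlap, 0.0 means no overlap. `eps` guards against empty masks.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

For example, two masks that each cover two voxels but share only one give a DSC of 0.5.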
Investigating Eye Movements to Examine Attachment-Related Differences in Facial Emotion Perception and Face Memory.
IF 2.7
Journal of Imaging Pub Date : 2025-02-16 DOI: 10.3390/jimaging11020060
Karolin Török-Suri, Kornél Németh, Máté Baradits, Gábor Csukly
{"title":"Investigating Eye Movements to Examine Attachment-Related Differences in Facial Emotion Perception and Face Memory.","authors":"Karolin Török-Suri, Kornél Németh, Máté Baradits, Gábor Csukly","doi":"10.3390/jimaging11020060","DOIUrl":"10.3390/jimaging11020060","url":null,"abstract":"<p><p>Individual differences in attachment orientations may influence how we process emotionally significant stimuli. As one of the most important sources of emotional information are facial expressions, we examined whether there is an association between adult attachment styles (i.e., scores on the ECR questionnaire, which measures the avoidance and anxiety dimensions of attachment), facial emotion perception and face memory in a neurotypical sample. Trait and state anxiety were also measured as covariates. Eye-tracking was used during the emotion decision task (happy vs. sad faces) and the subsequent facial recognition task; the length of fixations to different face regions was measured as the dependent variable. Linear mixed models suggested that differences during emotion perception may result from longer fixations in individuals with insecure (anxious or avoidant) attachment orientations. This effect was also influenced by individual state and trait anxiety measures. Eye movements during the recognition memory task, however, were not related to either of the attachment dimensions; only trait anxiety had a significant effect on the length of fixations in this condition. 
The results of our research may contribute to a more accurate understanding of facial emotion perception in the light of attachment styles, and their interaction with anxiety characteristics.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856241/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Applications of Artificial Intelligence, Deep Learning, and Machine Learning to Support the Analysis of Microscopic Images of Cells and Tissues.
IF 2.7
Journal of Imaging Pub Date : 2025-02-15 DOI: 10.3390/jimaging11020059
Muhammad Ali, Viviana Benfante, Ghazal Basirinia, Pierpaolo Alongi, Alessandro Sperandeo, Alberto Quattrocchi, Antonino Giulio Giannone, Daniela Cabibi, Anthony Yezzi, Domenico Di Raimondo, Antonino Tuttolomondo, Albert Comelli
{"title":"Applications of Artificial Intelligence, Deep Learning, and Machine Learning to Support the Analysis of Microscopic Images of Cells and Tissues.","authors":"Muhammad Ali, Viviana Benfante, Ghazal Basirinia, Pierpaolo Alongi, Alessandro Sperandeo, Alberto Quattrocchi, Antonino Giulio Giannone, Daniela Cabibi, Anthony Yezzi, Domenico Di Raimondo, Antonino Tuttolomondo, Albert Comelli","doi":"10.3390/jimaging11020059","DOIUrl":"10.3390/jimaging11020059","url":null,"abstract":"<p><p>Artificial intelligence (AI) transforms image data analysis across many biomedical fields, such as cell biology, radiology, pathology, cancer biology, and immunology, with object detection, image feature extraction, classification, and segmentation applications. Advancements in deep learning (DL) research have been a critical factor in advancing computer techniques for biomedical image analysis and data mining. A significant improvement in the accuracy of cell detection and segmentation algorithms has been achieved as a result of the emergence of open-source software and innovative deep neural network architectures. Automated cell segmentation now enables the extraction of quantifiable cellular and spatial features from microscope images of cells and tissues, providing critical insights into cellular organization in various diseases. 
This review aims to examine the latest AI and DL techniques for cell analysis and data mining in microscopy images, aid the biologists who have less background knowledge in AI and machine learning (ML), and incorporate the ML models into microscopy focus images.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856378/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Estimation of Trabecular Bone Volume with Dual-Echo Ultrashort Echo Time (UTE) Magnetic Resonance Imaging (MRI) Significantly Correlates with High-Resolution Computed Tomography (CT).
IF 2.7
Journal of Imaging Pub Date : 2025-02-13 DOI: 10.3390/jimaging11020057
Karen Y Cheng, Dina Moazamian, Behnam Namiranian, Hamidreza Shaterian Mohammadi, Salem Alenezi, Christine B Chung, Saeed Jerban
{"title":"Estimation of Trabecular Bone Volume with Dual-Echo Ultrashort Echo Time (UTE) Magnetic Resonance Imaging (MRI) Significantly Correlates with High-Resolution Computed Tomography (CT).","authors":"Karen Y Cheng, Dina Moazamian, Behnam Namiranian, Hamidreza Shaterian Mohammadi, Salem Alenezi, Christine B Chung, Saeed Jerban","doi":"10.3390/jimaging11020057","DOIUrl":"10.3390/jimaging11020057","url":null,"abstract":"<p><p>Trabecular bone architecture has important implications for the mechanical strength of bone. Trabecular elements appear as signal void when imaged utilizing conventional magnetic resonance imaging (MRI) sequences. Ultrashort echo time (UTE) MRI can acquire high signal from trabecular bone, allowing for quantitative evaluation. However, the trabecular morphology is often disturbed in UTE-MRI due to chemical shift artifacts caused by the presence of fat in marrow. This study aimed to evaluate a UTE-MRI technique to estimate the trabecular bone volume fraction (BVTV) without requiring trabecular-level morphological assessment. A total of six cadaveric distal tibial diaphyseal trabecular bone cubes were scanned using a dual-echo UTE Cones sequence (TE = 0.03 and 2.2 ms) on a clinical 3T MRI scanner and on a micro-computed tomography (μCT) scanner. The BVTV was calculated from 10 consecutive slices on both the MR and μCT images. BVTV calculated from the MR images showed strongly significant correlation with the BVTV determined from μCT images (R = 0.84, <i>p</i> < 0.01), suggesting that UTE-MRI is a feasible technique for the assessment of trabecular bone microarchitecture. 
This would allow for the non-invasive assessment of information regarding bone strength, and UTE-MRI may potentially serve as a novel tool for assessment of fracture risk.</p>","PeriodicalId":37035,"journal":{"name":"Journal of Imaging","volume":"11 2","pages":""},"PeriodicalIF":2.7,"publicationDate":"2025-02-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11856473/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143493948","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
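The reported agreement between modalities (R = 0.84) is a Pearson correlation over paired BVTV measurements. A plain-Python sketch of that formula, with illustrative data only (the study's BVTV values are not reproduced here):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired measurements.

    r = cov(x, y) / (sd(x) * sd(y)); ranges from -1 (perfect inverse
    relationship) to +1 (perfect linear agreement).
    """
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)
```

In the study's setting, `xs` would hold per-slice MRI-derived BVTV and `ys` the matching μCT-derived BVTV.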