International Journal of Biomedical Imaging: Latest Publications

X-Ray-Based 3D Histopathology of the Kidney Using Cryogenic Contrast-Enhanced MicroCT
IF 7.6
International Journal of Biomedical Imaging Pub Date: 2024-04-09 DOI: 10.1155/2024/3924036
Arne Maes, Onno Borgel, Clara Braconnier, Tim Balcaen, Martine Wevers, Rebecca Halbgebauer, Markus Huber-Lang, G. Kerckhofs
Abstract: The kidney's microstructure, which comprises a highly convoluted tubular and vascular network, can only be partially revealed by classical 2D histology. Because the kidney's microstructure is closely related to its function and is often affected by pathology, powerful high-resolution 3D imaging techniques are needed to visualize it. Here, we present how cryogenic contrast-enhanced microCT (cryo-CECT) allows 3D visualization of glomeruli, tubuli, and vasculature. By comparing different contrast-enhancing staining agents and freezing protocols, we found that the preferred sample preparation protocol was staining with 1:2 hafnium(IV)-substituted Wells-Dawson polyoxometalate combined with freezing by submersion in isopentane at −78°C. This optimized protocol proved highly sensitive, allowing us to detect small pathology-induced microstructural changes in a mouse model of mild trauma-related acute kidney injury after thorax trauma and hemorrhagic shock. In summary, we demonstrated that cryo-CECT is an effective 3D histopathology tool that enhances our understanding of kidney tissue microstructure and its relation to function.
Citations: 0
Enhanced Myocardial Tissue Visualization: A Comparative Cardiovascular Magnetic Resonance Study of Gradient-Spin Echo-STIR and Conventional STIR Imaging
IF 7.6
International Journal of Biomedical Imaging Pub Date: 2024-04-01, eCollection Date: 2024-01-01, DOI: 10.1155/2024/8456669
Sadegh Dehghani, Shapoor Shirani, Elahe Jazayeri Gharebagh
Abstract:
Purpose: This study evaluated the efficacy of the gradient-spin echo (GraSE)-based short tau inversion recovery (STIR) sequence (GraSE-STIR) in cardiovascular magnetic resonance (CMR) imaging compared to the conventional turbo spin echo (TSE)-based STIR sequence, focusing on image quality, specific absorption rate (SAR), and image acquisition time.
Methods: In a prospective study, we examined forty-four normal volunteers and seventeen patients referred for CMR imaging using conventional STIR and GraSE-STIR techniques. Signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), image quality, T2 signal intensity (SI) ratio, SAR, and image acquisition time were compared between the two sequences.
Results: GraSE-STIR showed significant improvements in image quality (4.15 ± 0.8 vs. 3.34 ± 0.9, p = 0.024) and cardiac motion artifact reduction (7 vs. 18 out of 53, p = 0.038) compared to conventional STIR. Furthermore, the acquisition time (27.17 ± 3.53 vs. 36.9 ± 4.08 seconds, p = 0.041) and the local torso SAR (<13% vs. <17%, p = 0.047) were significantly lower for GraSE-STIR in the short-axis plane. However, no significant differences were found in T2 SI ratio (p = 0.141), SNR (p = 0.093), CNR (p = 0.068), or SAR (p = 0.071) between the two sequences.
Conclusions: GraSE-STIR offers notable advantages over the conventional STIR sequence, with improved image quality, reduced motion artifacts, and shorter acquisition times. These findings highlight the potential of GraSE-STIR as a valuable technique for routine clinical CMR imaging.
Citations: 0
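The SNR and CNR figures above follow the usual ROI-based definitions. As a hedged illustration (the abstract does not specify ROI placement or the noise estimate, so the ROIs below are hypothetical), SNR and CNR can be computed like this:

```python
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """SNR = mean(tissue ROI) / SD(background ROI)."""
    return float(signal_roi.mean() / noise_roi.std(ddof=1))

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """CNR = |mean(A) - mean(B)| / SD(background ROI)."""
    return float(abs(roi_a.mean() - roi_b.mean()) / noise_roi.std(ddof=1))

# Hypothetical ROIs cropped from a short-axis STIR image (values are illustrative).
rng = np.random.default_rng(0)
myocardium = rng.normal(220, 15, size=(20, 20))   # assumed tissue ROI
blood_pool = rng.normal(150, 15, size=(20, 20))   # assumed reference tissue
background = rng.normal(0, 8, size=(30, 30))      # assumed air/background ROI
print(f"SNR = {snr(myocardium, background):.1f}")
print(f"CNR = {cnr(myocardium, blood_pool, background):.1f}")
```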
Detecting MRI-Invisible Prostate Cancers Using a Weakly Supervised Deep Learning Model
IF 7.6
International Journal of Biomedical Imaging Pub Date: 2024-03-19, eCollection Date: 2024-01-01, DOI: 10.1155/2024/2741986
Yao Zheng, Jingliang Zhang, Dong Huang, Xiaoshuo Hao, Weijun Qin, Yang Liu
Abstract:
Background: MRI is an important tool for accurate detection and targeted biopsy of prostate lesions. However, some prostate cancers appear similar to the surrounding normal tissue on MRI and are referred to as MRI-invisible prostate cancers (MIPCas). Detecting MIPCas remains challenging and requires extensive systematic biopsy. In this study, we developed a weakly supervised UNet (WSUNet) to detect MIPCas.
Methods: The study included 777 patients (training set: 600; testing set: 177), all of whom underwent comprehensive prostate biopsies using an MRI-ultrasound fusion system. MIPCas were identified on MRI based on the Gleason grade (≥7) from known systematic biopsy results.
Results: WSUNet was validated against systematic biopsy in the testing set, achieving an AUC of 0.764 (95% CI: 0.728-0.798). It also achieved a statistically significant precision improvement of 91.3% (p < 0.01) over conventional systematic biopsy, which translated into a 47.6% (p < 0.01) reduction in unnecessary biopsy needles while maintaining the same number of positively identified cores as the original systematic biopsy.
Conclusions: The proposed WSUNet can effectively detect MIPCas, thereby reducing unnecessary biopsies.
Citations: 0
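The abstract does not describe how the case-level biopsy labels supervise the UNet. A common pattern for weakly supervised segmentation, shown below as an assumption rather than the authors' published design, is to pool the network's pixel logits into a single case-level logit and train against the case label:

```python
import torch
import torch.nn as nn

class TinyUNetStub(nn.Module):
    """Stand-in for a real UNet: any network mapping an image to a 1-channel logit map."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1),
        )
    def forward(self, x):
        return self.body(x)

unet = TinyUNetStub()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(unet.parameters(), lr=1e-4)

# One weakly supervised step: case-level labels only (1 = biopsy-positive case).
images = torch.randn(4, 1, 128, 128)               # hypothetical MRI patches
labels = torch.tensor([1., 0., 1., 0.])            # case-level biopsy labels

optimizer.zero_grad()
logit_map = unet(images)                            # (B, 1, H, W) pixel logits
case_logit = logit_map.amax(dim=(2, 3)).squeeze(1)  # global max pool -> case logit
loss = criterion(case_logit, labels)
loss.backward()
optimizer.step()
# At inference, sigmoid(logit_map) serves as a lesion probability map.
```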
Empowering Radiographers: A Call for Integrated AI Training in University Curricula
IF 7.6
International Journal of Biomedical Imaging Pub Date: 2024-03-08, eCollection Date: 2024-01-01, DOI: 10.1155/2024/7001343
Mohammad A Rawashdeh, Sara Almazrouei, Maha Zaitoun, Praveen Kumar, Charbel Saade
Abstract:
Background: Artificial intelligence (AI) applications are rapidly advancing in the field of medical imaging. This study investigated radiographers' perception and knowledge of artificial intelligence.
Methods: An online survey built with Google Forms, consisting of 20 questions on radiographers' perception of AI. The questionnaire was divided into two parts. The first part collected demographic information, whether participants think AI should be part of medical training, their previous knowledge of the technologies used in AI, and whether they would prefer to receive AI training. The second part consisted of two fields, the first comprising 16 questions on radiographers' perception of AI applications in radiology. Descriptive analysis and logistic regression were used to evaluate the effect of gender on the questionnaire items.
Results: Familiarity with AI was low, with only 52 of 100 respondents (52%) reporting good familiarity with AI. Many participants considered AI useful in the medical field (74%). Nearly all participants (98%) believed that AI should be integrated into university education, and 87% preferred to receive training on AI, some already having prior knowledge of AI technologies. Logistic regression indicated a significant association of male gender and 23-27 years of experience with the degree of familiarity with AI technology, with respective crude odds ratios of 1.89 (COR = 1.89) and 1.87 (COR = 1.87).
Conclusions: This study suggests that medical practitioners hold a favorable attitude towards AI in radiology. Most participants believed that AI should be part of radiography education. AI training programs for undergraduate and postgraduate radiographers may be necessary to prepare them for AI tools in radiology.
Citations: 0
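For readers unfamiliar with crude odds ratios: an OR is the exponential of a fitted logistic regression coefficient. A minimal sketch with synthetic survey data (not the study's), using statsmodels:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 100
male = rng.integers(0, 2, n)            # 1 = male (illustrative covariate)
exp_23_27 = rng.integers(0, 2, n)       # 1 = 23-27 years of experience

# Hypothetical outcome: familiar with AI (1) or not (0); true log-odds
# coefficients of 0.64 and 0.63 correspond to ORs of about 1.9.
logit = -0.5 + 0.64 * male + 0.63 * exp_23_27
familiar = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([male, exp_23_27]))
fit = sm.Logit(familiar, X).fit(disp=0)
odds_ratios = np.exp(fit.params[1:])    # OR = exp(beta)
print(dict(zip(["male", "exp_23_27"], odds_ratios.round(2))))
```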
Facile Conversion and Optimization of Structured Illumination Image Reconstruction Code into the GPU Environment
IF 7.6
International Journal of Biomedical Imaging Pub Date: 2024-02-28, eCollection Date: 2024-01-01, DOI: 10.1155/2024/8862387
Kwangsung Oh, Piero R Bianco
Abstract: Superresolution structured illumination microscopy (SIM) is an ideal modality for imaging live cells due to its relatively high speed and low photon-induced damage to the cells. The rate-limiting step in observing a superresolution image in SIM is often the reconstruction speed of the algorithm used to form a single image from as many as nine raw images. Reconstruction algorithms impose a significant computing burden due to an intricate workflow and a large number of often complex calculations. The burden is compounded when the code, even within the MATLAB environment, is written inefficiently by microscopists who are not computer scientists and does not exploit the processing power of the computer's graphics processing unit (GPU). To address these issues, we present simple but efficient approaches to first revise MATLAB code and then convert it to GPU-optimized code. When combined with cost-effective, high-performance GPU-enabled computers, a 4- to 500-fold improvement in algorithm execution speed is observed, as shown for the image-denoising Hessian-SIM algorithm. Importantly, the improved algorithm produces images identical in quality to the original.
Citations: 0
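The paper's conversion happens inside MATLAB (gpuArray), but the pattern generalizes: keep the algorithm, move the arrays to the GPU once, run the math there, and transfer back once. A hedged NumPy/CuPy sketch of one FFT-based filtering step typical of SIM reconstruction pipelines (the function and filter are illustrative, not the Hessian-SIM code):

```python
import numpy as np
import cupy as cp  # drop-in GPU replacement for most of NumPy's API

def wiener_like_filter_cpu(img: np.ndarray, otf: np.ndarray, w: float = 0.1) -> np.ndarray:
    """CPU reference: frequency-domain filtering, a typical SIM reconstruction step."""
    spec = np.fft.fft2(img)
    out = spec * np.conj(otf) / (np.abs(otf) ** 2 + w)
    return np.real(np.fft.ifft2(out))

def wiener_like_filter_gpu(img: np.ndarray, otf: np.ndarray, w: float = 0.1) -> np.ndarray:
    """Same math; arrays are moved to the GPU once and all operations run there."""
    img_d, otf_d = cp.asarray(img), cp.asarray(otf)
    spec = cp.fft.fft2(img_d)
    out = spec * cp.conj(otf_d) / (cp.abs(otf_d) ** 2 + w)
    return cp.asnumpy(cp.real(cp.fft.ifft2(out)))  # single transfer back to host

img = np.random.rand(2048, 2048).astype(np.float32)
otf = np.fft.fftshift(np.random.rand(2048, 2048)).astype(np.complex64)
res = wiener_like_filter_gpu(img, otf)
# Keeping host<->device transfers to one in and one out is usually where
# speedups of the magnitude reported above come from.
```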
White Matter Fiber Tracking Method with Adaptive Correction of Tracking Direction
IF 7.6
International Journal of Biomedical Imaging Pub Date: 2024-02-05, eCollection Date: 2024-01-01, DOI: 10.1155/2024/4102461
Qian Zheng, Kefu Guo, Yinghui Meng, Jiaofen Nan, Lin Xu
Abstract:
Background: Deterministic fiber tracking has the advantages of high computational efficiency and good repeatability, making it suitable for noninvasive estimation of brain structural connectivity in clinical settings. To address the tendency of current classical deterministic methods to deviate from the true direction in regions of crossing fibers, we propose an adaptive correction-based deterministic white matter fiber tracking method, named FTACTD.
Methods: The proposed FTACTD method accurately tracks white matter fibers by adaptively adjusting the deflection direction based on the tensor matrix and the input fiber direction of adjacent voxels. The degree of correction changes adaptively according to the shape of the diffusion tensor, mimicking the actual tracking deflection angle and direction. Both forward and reverse tracking are employed to trace each fiber in full. The effectiveness of the method is validated and quantified on both simulated and real brain datasets, using indicators such as invalid bundles (IB), valid bundles (VB), invalid connections (IC), no connections (NC), and valid connections (VC) on simulated data and real diffusion-weighted imaging (DWI) data.
Results: On simulated data, FTACTD outperforms existing methods, achieving the highest number of VB (13 bundles) while identifying the fewest incorrect bundles (32). Compared to the FACT method, FTACTD reduces NC by 36.38%. In terms of VC, FTACTD surpasses even SD_Stream, the best-performing deterministic method, by 1.64%. Extensive in vivo experiments demonstrate that the method tracks more accurate and complete fiber paths with improved continuity.
Conclusion: The FTACTD method delivers superior tracking results and provides a methodological basis for the investigation, diagnosis, and treatment of brain disorders associated with white matter fiber deficits and abnormalities.
Citations: 0
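FTACTD's specific correction rules are given in the paper itself; for orientation, the sketch below shows the core loop that deterministic tensor tractography builds on: step along the principal eigenvector of the local diffusion tensor, stop at volume boundaries or sharp turns. The nearest-voxel lookup, thresholds, and toy tensor field are simplifying assumptions.

```python
import numpy as np

def principal_direction(tensor: np.ndarray) -> np.ndarray:
    """Unit principal eigenvector of a 3x3 diffusion tensor."""
    vals, vecs = np.linalg.eigh(tensor)   # eigenvalues in ascending order
    return vecs[:, -1]

def track(tensors, seed, step=0.5, max_steps=500, angle_max_deg=45.0):
    """Deterministic streamline from `seed` through a (X, Y, Z, 3, 3) tensor field."""
    pos, path = np.asarray(seed, float), [np.asarray(seed, float)]
    prev_dir = None
    cos_min = np.cos(np.radians(angle_max_deg))
    for _ in range(max_steps):
        vox = tuple(np.round(pos).astype(int))   # nearest-voxel lookup (assumption)
        if any(i < 0 or i >= s for i, s in zip(vox, tensors.shape[:3])):
            break                                 # left the volume
        d = principal_direction(tensors[vox])
        if prev_dir is not None:
            if np.dot(d, prev_dir) < 0:
                d = -d                            # eigenvectors are sign-ambiguous
            if np.dot(d, prev_dir) < cos_min:
                break                             # turn sharper than threshold
        pos = pos + step * d
        path.append(pos.copy())
        prev_dir = d
    return np.array(path)   # a full tracker would also run the reverse direction

# Toy field: tensors everywhere aligned with the x-axis.
field = np.tile(np.diag([1.0, 0.2, 0.2]), (10, 10, 10, 1, 1))
print(track(field, seed=(1.0, 5.0, 5.0))[:3])
```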
Skin Cancer Segmentation and Classification Using Vision Transformer for Automatic Analysis in Dermatoscopy-Based Noninvasive Digital System
IF 7.6
International Journal of Biomedical Imaging Pub Date: 2024-02-03, eCollection Date: 2024-01-01, DOI: 10.1155/2024/3022192
Galib Muhammad Shahriar Himel, Md Masudul Islam, Kh Abdullah Al-Aff, Shams Ibne Karim, Md Kabir Uddin Sikder
Abstract: Skin cancer is a significant health concern worldwide, and early, accurate diagnosis plays a crucial role in improving patient outcomes. In recent years, deep learning models have shown remarkable success in various computer vision tasks, including image classification. In this study, we introduce an approach to skin cancer classification using the vision transformer, a state-of-the-art deep learning architecture that has demonstrated exceptional performance in diverse image analysis tasks. The study utilizes the HAM10000 dataset, a publicly available collection of 10,015 skin lesion images classified into two categories: benign (6,705 images) and malignant (3,310 images). The dataset consists of high-resolution images captured with dermatoscopes and carefully annotated by expert dermatologists. Preprocessing techniques such as normalization and augmentation are applied to improve the robustness and generalization of the model. The vision transformer architecture is adapted to the skin cancer classification task: the model leverages self-attention to capture intricate spatial and long-range dependencies within the images, enabling it to learn features relevant for accurate classification. The Segment Anything Model (SAM) is employed to segment the cancerous areas from the images, achieving an IoU of 96.01% and a Dice coefficient of 98.14%; various pretrained models are then used for classification within the vision transformer framework. Extensive experiments show that the vision transformer generally outperforms traditional deep learning architectures in skin cancer classification, with some exceptions. Among the six models tested (ViT-Google, ViT-MAE, ViT-ResNet50, ViT-VAN, ViT-BEiT, and ViT-DiT), the approach achieved 96.15% accuracy using Google's ViT patch-32 model, with a low false-negative ratio on the test dataset, showcasing its potential as an effective tool for aiding dermatologists in the diagnosis of skin cancer.
Citations: 0
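As a hedged sketch of the classification stage (the timm checkpoint name below is an assumed stand-in for "Google's ViT patch-32"; the paper's training recipe is not reproduced), fine-tuning a pretrained ViT for the binary benign/malignant task looks like this:

```python
import torch
import torch.nn as nn
import timm

# 'vit_base_patch32_224' is a stand-in for the paper's "Google ViT patch-32" checkpoint.
model = timm.create_model("vit_base_patch32_224", pretrained=True, num_classes=2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One training step on a hypothetical batch of (SAM-cropped) lesion images.
images = torch.randn(8, 3, 224, 224)        # normalized RGB crops
labels = torch.randint(0, 2, (8,))          # 0 = benign, 1 = malignant

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()

# Evaluation: thresholding the malignant probability low keeps false negatives down.
model.eval()
with torch.no_grad():
    p_malignant = model(images).softmax(dim=1)[:, 1]
```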
Segmentation of Dynamic Total-Body [18F]-FDG PET Images Using Unsupervised Clustering
IF 7.6
International Journal of Biomedical Imaging Pub Date: 2023-12-05, eCollection Date: 2023-01-01, DOI: 10.1155/2023/3819587
Maria K Jaakkola, Maria Rantala, Anna Jalo, Teemu Saari, Jaakko Hentilä, Jatta S Helin, Tuuli A Nissinen, Olli Eskola, Johan Rajander, Kirsi A Virtanen, Jarna C Hannukainen, Francisco López-Picón, Riku Klén
Abstract: Clustering of time-activity curves from PET images has been used to separate clinically relevant areas of the brain or tumours. However, PET image segmentation at the multiorgan level is much less studied, because available total-body data have been limited to animal studies. New PET scanners that can acquire total-body scans of humans are becoming more common, opening many clinically interesting opportunities; organ-level segmentation of PET images therefore has important applications yet lacks sufficient research. In this proof-of-concept study, we evaluate whether previously used segmentation approaches are suitable for segmenting dynamic human total-body PET images at the organ level. Our focus is on general-purpose unsupervised methods that are independent of external data and can be applied to all tracers, organisms, and health conditions. No additional anatomical modalities, such as CT or MRI, are used; segmentation is based purely on the dynamic PET images. The tested methods are common building blocks of more sophisticated pipelines rather than final methods in themselves, and our goal is to evaluate whether these basic tools suit the emerging task of human total-body PET segmentation. We first excluded methods that were computationally too demanding for the large datasets produced by human total-body PET scanners. These criteria filtered out most commonly used approaches, leaving only two clustering methods, k-means and the Gaussian mixture model (GMM), for further analysis. We combined k-means with two preprocessing approaches, principal component analysis (PCA) and independent component analysis (ICA), selected a suitable number of clusters using 10 images, and finally tested how well the usable approaches segment the remaining PET images at the organ level. We utilised 40 total-body [18F]fluorodeoxyglucose PET images of rats to mimic the coming large human PET images, plus a few actual human total-body images to ensure that the conclusions from the rat data generalise. Our results show that ICA combined with k-means performs worse than the other two computationally feasible approaches and that certain organs are easier to segment than others. While GMM performed adequately, it was by far the slowest method tested, making k-means combined with PCA the most promising candidate for further development. However, even with the best methods, the mean Jaccard index was slightly below 0.5 for the easiest tested organ and below 0.2 for the most challenging one. We therefore conclude that an accurate and computationally light general-purpose segmentation method for dynamic total-body PET images is still lacking.
Citations: 0
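The most promising pipeline identified above, PCA on voxelwise time-activity curves followed by k-means, is simple to sketch. A minimal scikit-learn version with illustrative shapes and cluster count (not the study's settings):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical dynamic PET volume: T time frames, flattened to voxels x T.
T, n_vox = 30, 50_000
tacs = np.random.rand(n_vox, T).astype(np.float32)   # time-activity curves (rows)

# 1) PCA compresses each curve to a few components (denoises and speeds up k-means).
feats = PCA(n_components=5).fit_transform(tacs)

# 2) k-means groups voxels with similar kinetics; k would be tuned per organ set.
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(feats)

# labels.reshape(vol_shape) would map cluster IDs back into image space.
print(np.bincount(labels))
```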
Automatic Detection of AMD and DME Retinal Pathologies Using Deep Learning
IF 7.6
International Journal of Biomedical Imaging Pub Date: 2023-11-24, eCollection Date: 2023-01-01, DOI: 10.1155/2023/9966107
Latifa Saidi, Hajer Jomaa, Haddad Zainab, Hsouna Zgolli, Sonia Mabrouk, Désiré Sidibé, Hedi Tabia, Nawres Khlifa
Abstract: Diabetic macular edema (DME) and age-related macular degeneration (AMD) are two common eye diseases that are often undiagnosed or diagnosed late, which can result in permanent and irreversible vision loss. Early detection and treatment can therefore prevent vision loss, save money, and provide a better quality of life. Optical coherence tomography (OCT) imaging is widely applied to identify eye diseases, including DME and AMD. In this work, we developed automatic deep learning-based methods to detect these pathologies from SD-OCT scans. A convolutional neural network (CNN) developed from scratch gave the best classification score, with an accuracy above 99% on the Duke dataset of OCT images.
Citations: 0
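The abstract does not detail the CNN architecture; as a hedged sketch, a small from-scratch OCT classifier of this kind typically looks like the following (layer sizes and the three-class AMD/DME/normal split are assumptions):

```python
import torch
import torch.nn as nn

class OCTClassifier(nn.Module):
    """Small from-scratch CNN for OCT classification (assumed classes: AMD/DME/normal)."""
    def __init__(self, n_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = OCTClassifier()
scans = torch.randn(4, 1, 224, 224)     # grayscale SD-OCT B-scans (hypothetical size)
print(model(scans).shape)               # torch.Size([4, 3])
```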
Assessment of the Impact of Turbo Factor on Image Quality and Tissue Volumetrics in Brain Magnetic Resonance Imaging Using the Three-Dimensional T1-Weighted (3D T1W) Sequence
IF 7.6
International Journal of Biomedical Imaging Pub Date: 2023-11-15, eCollection Date: 2023-01-01, DOI: 10.1155/2023/6304219
Eric Naab Manson, Stephen Inkoom, Abdul Nashirudeen Mumuni, Issahaku Shirazu, Adolf Kofi Awua
Abstract:
Background: The 3D T1W turbo field echo sequence is a standard method for acquiring high-contrast images of the brain. However, the contrast-to-noise ratio (CNR) can be affected by the turbo factor, which could affect the delineation and segmentation of brain structures and consequently lead to misdiagnosis. This study evaluated the effect of the turbo factor on image quality and the reproducibility of volumetric measurements in brain MRI.
Methods: Brain images of five healthy volunteers with no history of neurological disease were acquired on a 1.5 T MRI scanner with turbo factors of 50, 100, 150, 200, and 225, and were processed and analyzed with FreeSurfer. Image quality metrics included the signal-to-noise ratio (SNR) of white matter (WM), the CNR between gray matter and white matter (GM/WM) and between gray matter and cerebrospinal fluid (GM/CSF), and the Euler number (EN). Structural volume measurements of WM, GM, and CSF were also conducted.
Results: Turbo factor 200 produced the best SNR (median = 17.01) and GM/WM CNR (median = 2.29), while turbo factor 100 offered the most reproducible SNR (IQR = 2.72) and GM/WM CNR (IQR = 0.14). Turbo factor 50 had the worst and least reproducible SNR, whereas turbo factor 225 had the worst and least reproducible GM/WM CNR. Turbo factor 200 also had the best GM/CSF CNR, but the least reproducible one. Turbo factor 225 performed best on EN (-21), with turbo factor 200 next most reproducible on EN (11). Overall, turbo factor 200 combined the shortest data acquisition time with superior SNR, GM/WM CNR, and GM/CSF CNR and good EN reproducibility. By one-way ANOVA, neither the image quality metrics nor the volumetric measurements varied significantly (p > 0.05) across the range of turbo factors used in the study.
Conclusion: Since no significant differences were observed in image quality or brain structure volumes across turbo factors, turbo factor 200, with a 74% reduction in acquisition time, was found to be optimal for brain MR imaging at 1.5 T.
Citations: 0
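The one-way ANOVA check reported above is straightforward to outline: one group of measurements per turbo factor, one test per metric. A hedged SciPy sketch with placeholder numbers (not the study's data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
turbo_factors = [50, 100, 150, 200, 225]

# Hypothetical per-volunteer SNR measurements for each turbo factor (5 subjects each).
snr_by_tf = {tf: rng.normal(loc=16.0, scale=2.0, size=5) for tf in turbo_factors}

f_stat, p_value = stats.f_oneway(*snr_by_tf.values())
print(f"one-way ANOVA across turbo factors: F = {f_stat:.2f}, p = {p_value:.3f}")
# p > 0.05 would match the study's finding that SNR does not vary significantly
# with turbo factor, supporting the choice of TF 200 for its 74% time saving.
```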