Journal of Digital Imaging: Latest Articles

Exploring the Low-Dose Limit for Focal Hepatic Lesion Detection with a Deep Learning-Based CT Reconstruction Algorithm: A Simulation Study on Patient Images
IF 4.4, Q2 (Engineering & Technology)
Journal of Digital Imaging, Pub Date: 2024-03-19, DOI: 10.1007/s10278-024-01080-3
Yongchun You, Sihua Zhong, Guozhi Zhang, Yuting Wen, Dian Guo, Wanjiang Li, Zhenlin Li
Abstract: This study investigates the maximum achievable dose reduction when applying a new deep learning-based reconstruction algorithm, the artificial intelligence iterative reconstruction (AIIR), in computed tomography (CT) for hepatic lesion detection. A total of 40 patients with 98 clinically confirmed hepatic lesions were retrospectively included. The mean volume CT dose index was 13.66 ± 1.73 mGy in routine-dose portal venous CT examinations, where the images were originally obtained with hybrid iterative reconstruction (HIR). Low-dose simulations were performed in the projection domain for 40%-, 20%-, and 10%-dose levels, followed by reconstruction using both HIR and AIIR. Two radiologists were asked to detect hepatic lesions on each set of low-dose images in separate sessions. Qualitative metrics including lesion conspicuity, diagnostic confidence, and overall image quality were evaluated using a 5-point scale. The contrast-to-noise ratio (CNR) for lesions was also calculated for quantitative assessment. The lesion CNR on AIIR at reduced doses was significantly higher than that on routine-dose HIR (all p < 0.05). Qualitative image quality decreased as the radiation dose was reduced, while there were no significant differences between 40%-dose AIIR and routine-dose HIR images. The lesion detection rate was 100%, 98% (96/98), and 73.5% (72/98) on 40%-, 20%-, and 10%-dose AIIR, respectively, whereas it was 98% (96/98), 73.5% (72/98), and 40% (39/98) on the corresponding low-dose HIR. AIIR outperformed HIR in simulated low-dose CT examinations of the liver. The use of AIIR allows up to 60% dose reduction for lesion detection while maintaining image quality comparable to routine-dose HIR.
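The contrast-to-noise ratio reported in this abstract can be illustrated with a short sketch. The ROI-statistics definition below (absolute difference of lesion and background means over the background noise) is a common convention and an assumption here, not taken from the paper:

```python
import numpy as np

def lesion_cnr(lesion_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """Contrast-to-noise ratio: absolute mean difference over background noise."""
    contrast = abs(float(lesion_roi.mean()) - float(background_roi.mean()))
    noise = float(background_roi.std(ddof=1))  # sample SD of the liver background
    return contrast / noise

# Synthetic HU samples for a hypodense lesion against liver parenchyma (toy values)
rng = np.random.default_rng(0)
lesion = rng.normal(45.0, 10.0, size=500)   # lesion ROI values (HU)
liver = rng.normal(110.0, 12.0, size=500)   # background liver ROI values (HU)
print(round(lesion_cnr(lesion, liver), 2))
```

A higher CNR at a given dose means the lesion stands out more clearly from the surrounding parenchyma, which is why the metric is used to compare reconstruction algorithms at matched dose levels.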
Citations: 0
Auto-segmentation of Adult-Type Diffuse Gliomas: Comparison of Transfer Learning-Based Convolutional Neural Network Model vs. Radiologists
Journal of Digital Imaging, Pub Date: 2024-02-21, DOI: 10.1007/s10278-024-01044-7
Abstract: Segmentation of glioma is crucial for quantitative brain tumor assessment, to guide therapeutic research and clinical management, but it is very time-consuming, so fully automated tools for the segmentation of multi-sequence MRI are needed. We developed and pretrained a deep learning (DL) model using publicly available datasets A (n = 210) and B (n = 369) containing FLAIR, T2WI, and contrast-enhanced (CE)-T1WI. The model was then fine-tuned with our institutional dataset (n = 197) containing ADC, T2WI, and CE-T1WI, manually annotated by radiologists and split into training (n = 100) and testing (n = 97) sets. The Dice similarity coefficient (DSC) was used to compare model outputs with manual labels. A third independent radiologist assessed segmentation quality on a semi-quantitative 5-point score. Differences in DSC between new and recurrent gliomas, and between unifocal and multifocal gliomas, were analyzed using the Mann–Whitney test. Semi-quantitative analyses were compared using the chi-square test. There was good agreement between segmentations from the fine-tuned DL model and ground-truth manual segmentations (median DSC: 0.729, standard deviation: 0.134). DSC was higher for newly diagnosed (0.807) than recurrent (0.698) gliomas (p < 0.001), and higher for unifocal (0.747) than multifocal (0.613) cases (p = 0.001). Semi-quantitative scores of DL and manual segmentation were not significantly different (mean: 3.567 vs. 3.639; 93.8% vs. 97.9% scoring ≥ 3; p = 0.107). In conclusion, the proposed transfer learning DL model performed similarly to human radiologists in glioma segmentation on both structural and ADC sequences. Further improvement in segmenting challenging postoperative and multifocal glioma cases is needed.
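For reference, the agreement metric used in this study, the Dice similarity coefficient on binary masks, can be sketched in a few lines (a standard definition, written here independently of the study's pipeline):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient (DSC) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks agree

a = np.array([[0, 1, 1], [0, 1, 0]])
b = np.array([[0, 1, 0], [0, 1, 1]])
print(round(dice(a, b), 3))  # overlap 2, mask sizes 3 + 3 -> 0.667
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why a median of 0.729 is read as good agreement with the manual labels.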
Citations: 0
Developing a Radiomics Atlas Dataset of Normal Abdominal and Pelvic Computed Tomography (RADAPT)
Journal of Digital Imaging, Pub Date: 2024-02-21, DOI: 10.1007/s10278-024-01028-7
Abstract: Atlases of normal genomics, transcriptomics, proteomics, and metabolomics have been published in an attempt to understand the biological phenotype in health and disease and to set the basis for comprehensive comparative omics studies, but no such atlas exists for radiomics data. The purpose of this study was to systematically create a radiomics dataset of normal abdominal and pelvic CT that can be used for model development and validation. Young adults without any previously known disease, aged > 17 and ≤ 36 years, were retrospectively included. All patients had undergone CT scanning for emergency indications. Where abnormal findings were identified, the relevant anatomical structures were excluded. Deep learning was used to automatically segment the majority of visible anatomical structures with the TotalSegmentator model as applied in 3D Slicer. Radiomics features, including first-order, texture, wavelet, and Laplacian-of-Gaussian-transformed features, were extracted with PyRadiomics. A GitHub repository was created to host the resulting dataset. Radiomics data were extracted from a total of 531 patients (250 female, 281 male) with a mean age of 26.8 ± 5.19 years. A maximum of 53 anatomical structures were segmented and used for subsequent radiomics data extraction. Radiomics features were derived from a total of 526 non-contrast and 400 contrast-enhanced (portal venous) series. The dataset is publicly available for model development and validation purposes.
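The study extracted features with PyRadiomics; to illustrate what a "first-order" radiomics feature is, here is a hand-rolled sketch of two representative ones (energy and histogram entropy), following the usual first-order definitions. This is a conceptual sketch, not the extraction pipeline used for the dataset:

```python
import numpy as np

def first_order_features(roi: np.ndarray, bins: int = 25) -> dict:
    """Two representative first-order radiomics features of an ROI.

    Energy and entropy follow the common first-order definitions (as in
    PyRadiomics); illustrative only, not the study's configuration.
    """
    x = roi.ravel().astype(float)
    energy = float(np.sum(x ** 2))              # sum of squared intensities
    hist, _ = np.histogram(x, bins=bins)        # intensity histogram
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-np.sum(p * np.log2(p)))    # Shannon entropy of intensities
    return {"Energy": energy, "Entropy": entropy}

roi = np.random.default_rng(42).normal(60.0, 15.0, size=(8, 8, 8))  # toy HU volume
feats = first_order_features(roi)
print(sorted(feats))  # ['Energy', 'Entropy']
```

A full radiomics run additionally computes texture matrices (GLCM, GLRLM, etc.) and applies wavelet and Laplacian-of-Gaussian filters before feature extraction, which is how feature counts reach the hundreds per structure.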
Citations: 0
Automatic Tracking of Hyoid Bone Displacement and Rotation Relative to Cervical Vertebrae in Videofluoroscopic Swallow Studies Using Deep Learning
Journal of Digital Imaging, Pub Date: 2024-02-21, DOI: 10.1007/s10278-024-01039-4
Wuqi Li, Shitong Mao, Amanda S. Mahoney, James L. Coyle, Ervin Sejdić
Abstract: Hyoid bone displacement and rotation are critical kinematic events of the swallowing process in the assessment of videofluoroscopic swallow studies (VFSS). However, quantitative analysis of such events requires frame-by-frame manual annotation, which is labor-intensive and time-consuming. Our work aims to develop a method for automatically tracking hyoid bone displacement and rotation in VFSS. We proposed a full high-resolution network, a deep learning architecture, to detect the anterior and posterior ends of the hyoid bone and thereby identify its location and rotation. Meanwhile, the anterior-inferior corners of the C2 and C4 vertebrae were detected simultaneously to automatically establish a new coordinate system and eliminate the effect of posture change. The proposed model was developed on 59,468 VFSS frames collected from 1488 swallowing samples, and it achieved an average landmark localization error of 2.38 pixels (around 0.5% of a 448 × 448 pixel image) and an average angle prediction error of 0.065 radians in predicting C2–C4 and hyoid bone angles. In addition, displacement of the hyoid bone center was automatically tracked in a frame-by-frame analysis, achieving average mean absolute errors of 2.22 and 2.78 pixels in the x-axis and y-axis, respectively. These results support the effectiveness and accuracy of the proposed method in detecting hyoid bone displacement and rotation. Our study provides an automatic method for analyzing hyoid bone kinematics during VFSS, which could contribute to early diagnosis and effective disease management.
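The posture-invariant measurement idea described above can be sketched as follows: take the detected C2 and C4 anterior-inferior corners as a reference axis and measure the hyoid direction against it. The landmark values below are toy numbers, and the paper's exact angle convention may differ:

```python
import numpy as np

def hyoid_angle(c2: np.ndarray, c4: np.ndarray, hyoid: np.ndarray) -> float:
    """Angle (radians) of the hyoid relative to the C2->C4 vertebral axis.

    Building the reference frame from the two vertebral landmarks makes
    the measurement insensitive to overall posture change.
    """
    axis = c4 - c2
    to_hyoid = hyoid - c2
    cosang = np.dot(axis, to_hyoid) / (np.linalg.norm(axis) * np.linalg.norm(to_hyoid))
    return float(np.arccos(np.clip(cosang, -1.0, 1.0)))

c2 = np.array([0.0, 0.0])
c4 = np.array([0.0, -2.0])      # inferior along the image y-axis (toy coordinates)
hyoid = np.array([1.0, -1.0])   # anterior-inferior to C2
print(round(hyoid_angle(c2, c4, hyoid), 3))  # 45 degrees off the vertebral axis
```

Tracking this angle and the hyoid center position frame by frame yields the displacement and rotation trajectories that the manual annotation would otherwise provide.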
Citations: 0
Personalized Impression Generation for PET Reports Using Large Language Models
Journal of Digital Imaging, Pub Date: 2024-02-02, DOI: 10.1007/s10278-024-00985-3
Xin Tie, Muheon Shin, Ali Pirasteh, Nevein Ibrahim, Zachary Huemann, Sharon M. Castellino, Kara M. Kelly, John Garrett, Junjie Hu, Steve Y. Cho, Tyler J. Bradshaw
Abstract: Large language models (LLMs) have shown promise in accelerating radiology reporting by summarizing clinical findings into impressions. However, automatic impression generation for whole-body PET reports presents unique challenges and has received little attention. Our study aimed to evaluate whether LLMs can create clinically useful impressions for PET reporting. To this end, we fine-tuned twelve open-source language models on a corpus of 37,370 retrospective PET reports collected from our institution. All models were trained using the teacher-forcing algorithm, with the report findings and patient information as input and the original clinical impressions as reference. An extra input token encoded the reading physician's identity, allowing models to learn physician-specific reporting styles. To compare the performance of different models, we computed various automatic evaluation metrics and benchmarked them against physician preferences, ultimately selecting PEGASUS as the top LLM. To evaluate its clinical utility, three nuclear medicine physicians assessed the PEGASUS-generated impressions and original clinical impressions across six quality dimensions (3-point scales) and an overall utility score (5-point scale). Each physician reviewed 12 of their own reports and 12 reports from other physicians. When physicians assessed LLM impressions generated in their own style, 89% were considered clinically acceptable, with a mean utility score of 4.08/5. On average, physicians rated these personalized impressions as comparable in overall utility to impressions dictated by other physicians (4.03; p = 0.41). In summary, our study demonstrated that personalized impressions generated by PEGASUS were clinically useful in most cases, highlighting its potential to expedite PET reporting by automatically drafting impressions.
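The physician-conditioning trick, encoding the reading physician as an extra input token, can be sketched in a few lines. Everything here (token IDs, vocabulary size, the helper name) is a toy illustration of the idea, not the paper's implementation:

```python
def build_input(findings_tokens: list, physician_id: int, base_vocab: int) -> list:
    """Prepend a physician-specific token ID to the findings tokens.

    Each physician gets a dedicated token beyond the base vocabulary, so a
    sequence-to-sequence model can condition on, and learn, that physician's
    reporting style. IDs below are made-up values.
    """
    physician_token = base_vocab + physician_id
    return [physician_token] + findings_tokens

findings = [101, 7592, 2088, 102]  # toy token IDs for the report findings
seq = build_input(findings, physician_id=3, base_vocab=30000)
print(seq)  # the physician token (30003) leads the sequence
```

At inference time, choosing which physician token to prepend selects whose style the generated impression imitates, which is what enabled the "own style vs. other physicians" comparison in the evaluation.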
Citations: 0
Multi-Class Deep Learning Model for Detecting Pediatric Distal Forearm Fractures Based on the AO/OTA Classification
Journal of Digital Imaging, Pub Date: 2024-02-02, DOI: 10.1007/s10278-024-00968-4
Le Nguyen Binh, Nguyen Thanh Nhu, Vu Pham Thao Vy, Do Le Hoang Son, Truong Nguyen Khanh Hung, Nguyen Bach, Hoang Quoc Huy, Le Van Tuan, Nguyen Quoc Khanh Le, Jiunn-Horng Kang
Abstract: Common pediatric distal forearm fractures necessitate precise detection. To support prompt treatment planning by clinicians, our study aimed to create a multi-class convolutional neural network (CNN) model for pediatric distal forearm fractures, guided by the AO Foundation/Orthopaedic Trauma Association (AO/OTA) classification system for pediatric fractures. The GRAZPEDWRI-DX dataset (2008–2018) of wrist X-ray images was used. We labeled images into four fracture classes (FRM, FUM, FRE, and FUE, where F = fracture, R = radius, U = ulna, M = metaphysis, and E = epiphysis) based on the pediatric AO/OTA classification. We performed multi-class classification by training a YOLOv4-based CNN object detection model with 7006 images from 1809 patients (80% for training and 20% for validation). An 88-image test set from 34 patients was used to evaluate model performance, which was then compared with the diagnostic performance of two readers, an orthopedist and a radiologist. The mean average precision on the validation set for the four classes was 0.97, 0.92, 0.95, and 0.94, respectively. On the test set, the model achieved sensitivities of 0.86, 0.71, 0.88, and 0.89; specificities of 0.88, 0.94, 0.97, and 0.98; and area under the curve (AUC) values of 0.87, 0.83, 0.93, and 0.94, respectively. The best performance among the three readers belonged to the radiologist, with a mean AUC of 0.922, followed by our model (0.892) and the orthopedist (0.830). Using the AO/OTA concept, our multi-class fracture detection model thus excelled in identifying pediatric distal forearm fractures.
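The per-class sensitivity and specificity figures above come straight from confusion counts; a minimal sketch (the counts below are made-up, not the study's):

```python
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple:
    """Per-class sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    return sensitivity, specificity

# Toy counts for one fracture class on a small test set
se, sp = sens_spec(tp=19, fn=3, tn=57, fp=9)
print(round(se, 2), round(sp, 2))
```

In a multi-class setting these are computed one-vs-rest per class, which is why the abstract reports four values of each.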
Citations: 0
Evaluation of Mucosal Healing in Crohn's Disease: Radiomics Models of Intestinal Wall and Mesenteric Fat Based on Dual-Energy CT
Journal of Digital Imaging, Pub Date: 2024-02-01, DOI: 10.1007/s10278-024-00989-z
Abstract: This study assesses the effectiveness of radiomics signatures obtained from dual-energy computed tomography enterography (DECTE) in the evaluation of mucosal healing (MH) in patients diagnosed with Crohn's disease (CD). In this study, 106 CD patients with a total of 221 diseased intestinal segments (79 with MH and 142 non-MH) from two medical centers were included and randomly divided into training and testing cohorts at a ratio of 7:3. Radiomics features were extracted from the enteric-phase iodine maps and from 40-keV and 70-keV virtual monoenergetic images (VMIs) of the diseased intestinal segments, as well as from mesenteric fat. Feature selection was performed using least absolute shrinkage and selection operator (LASSO) logistic regression. Radiomics models were subsequently established, and their accuracy in identifying MH in CD was assessed by calculating the area under the receiver operating characteristic curve (AUC). The combined-iodine model, formulated by integrating the intestinal and mesenteric fat radiomics features of the iodine maps, exhibited the most favorable performance in evaluating MH, with AUCs of 0.989 (95% confidence interval (CI) 0.977–1.000) in the training cohort and 0.947 (95% CI 0.884–1.000) in the testing cohort. Patients categorized as high risk by the combined-iodine model had a greater probability of disease progression than low-risk patients. The combined-iodine radiomics model, built upon iodine maps of diseased intestinal segments and mesenteric fat, thus demonstrated promising performance in evaluating MH in CD patients.
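LASSO performs feature selection by driving the coefficients of uninformative features to exactly zero. Below is a numpy sketch of that mechanism via ISTA (proximal gradient descent with soft-thresholding) on a linear toy problem; the study used LASSO logistic regression, so this linear version only illustrates the selection behavior, not the study's model:

```python
import numpy as np

def lasso_ista(X: np.ndarray, y: np.ndarray, lam: float, steps: int = 500) -> np.ndarray:
    """LASSO via ISTA: gradient step on the squared loss, then soft-threshold.

    Minimizes (1/2n)||Xw - y||^2 + lam * ||w||_1; the soft-threshold step is
    what zeroes out coefficients of uninformative features.
    """
    n, p = X.shape
    w = np.zeros(p)
    lr = n / np.linalg.norm(X, 2) ** 2  # step size 1/L, L = Lipschitz constant
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / n
        w = w - lr * grad
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * rng.normal(size=200)  # only features 0 and 2 matter
w = lasso_ista(X, y, lam=0.1)
print(np.nonzero(np.abs(w) > 1e-8)[0])
```

In a radiomics pipeline the same principle prunes hundreds of correlated features down to the handful that enter the final signature.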
Citations: 0
Impacts of Adaptive Statistical Iterative Reconstruction-V and Deep Learning Image Reconstruction Algorithms on Robustness of CT Radiomics Features: Opportunity for Minimizing Radiomics Variability Among Scans of Different Dose Levels
Journal of Digital Imaging, Pub Date: 2024-01-29, DOI: 10.1007/s10278-023-00901-1
Jingyu Zhong, Zhiyuan Wu, Lingyun Wang, Yong Chen, Yihan Xia, Lan Wang, Jianying Li, Wei Lu, Xiaomeng Shi, Jianxing Feng, Haipeng Dong, Huan Zhang, Weiwu Yao
Abstract: This study investigates the influence of adaptive statistical iterative reconstruction-V (ASIR-V) and deep learning image reconstruction (DLIR) on the robustness of CT radiomics features. A standardized phantom was scanned under single-energy CT (SECT) and dual-energy CT (DECT) modes at standard and low (20 and 10 mGy) dose levels. Images of SECT 120 kVp and corresponding DECT 120 kVp-like virtual monochromatic images were generated with filtered back-projection (FBP), ASIR-V at 40% (AV-40) and 100% (AV-100) blending levels, and the DLIR algorithm at low (DLIR-L), medium (DLIR-M), and high (DLIR-H) strength levels. Ninety-four features were extracted via PyRadiomics. Reproducibility of features was calculated between standard and low dose levels, between reconstruction algorithms in reference to FBP images, and within scan mode, using the intraclass correlation coefficient (ICC) and the concordance correlation coefficient (CCC). The average percentage of features with ICC > 0.90 and CCC > 0.90 between the two dose levels was 21.28% and 20.75% in AV-40 images and 39.90% and 35.11% in AV-100 images, respectively, and increased from 15.43% to 45.22% and from 15.43% to 44.15% with increasing DLIR strength. The average percentage of features with ICC > 0.90 and CCC > 0.90 in reference to FBP images was 26.07% and 25.80% in AV-40 images and 18.88% and 18.62% in AV-100 images, respectively, and decreased from 27.93% to 17.82% and from 27.66% to 17.29% with increasing DLIR strength. DLIR and ASIR-V showed low reproducibility in reference to FBP images, while the high-strength DLIR algorithm provides an opportunity to minimize radiomics variability due to dose reduction.
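One of the two reproducibility metrics above, Lin's concordance correlation coefficient, penalizes both poor correlation and systematic shifts between two sets of measurements; a minimal sketch:

```python
import numpy as np

def ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two measurement sets."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()
    return float(2 * cov / (vx + vy + (mx - my) ** 2))

a = np.array([1.0, 2.0, 3.0, 4.0])
print(ccc(a, a))                   # perfect agreement -> 1.0
print(round(ccc(a, a + 1.0), 3))   # systematic offset lowers concordance -> 0.714
```

Unlike Pearson's r, which would still be 1.0 for the offset case, CCC drops whenever the two feature sets disagree in location or scale, which is why it is used alongside ICC to flag non-reproducible radiomics features.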
Citations: 0
An Automatic Framework for Nasal Esthetic Assessment by ResNet Convolutional Neural Network
Journal of Digital Imaging, Pub Date: 2024-01-29, DOI: 10.1007/s10278-024-00973-7
Abstract: Nasal base aesthetics is an interesting and challenging issue that has attracted research attention in recent years. With that insight, in this study we propose a novel automatic framework (AF) for evaluating the nasal base, which can be useful for improving symmetry in rhinoplasty and reconstruction. The introduced AF includes a hybrid model for nasal base landmark recognition and a combined model for predicting nasal base symmetry. The proposed state-of-the-art nasal base landmark detection model is trained on nasal base images for comprehensive qualitative and quantitative assessment. Then, deep convolutional neural network (CNN) and multi-layer perceptron (MLP) models are integrated by concatenating their last hidden layers to evaluate nasal base symmetry based on geometric features and tiled images of the nasal base. This study also explores data augmentation, applying methods motivated by commonly used image augmentation techniques. According to the experimental findings, the results of the AF are closely related to the otolaryngologists' ratings and are useful for preoperative planning, intraoperative decision-making, and postoperative assessment. Furthermore, visualization indicates that the proposed AF is capable of predicting nasal base symmetry and capturing asymmetric areas to facilitate semantic predictions. The code is accessible at https://github.com/AshooriMaryam/Nasal-Aesthetic-Assessment-Deep-learning.
Citations: 0
Review of the Free Research Software for Computer-Assisted Interventions
Journal of Digital Imaging, Pub Date: 2024-01-29, DOI: 10.1007/s10278-023-00912-y
Zaiba Amla, Parminder Singh Khehra, Ashley Mathialagan, Elodie Lugez
Abstract: Research software is continuously developed to facilitate progress and innovation in the medical field. Over time, numerous research software programs have been created, making it challenging to keep abreast of what is available. This work aims to evaluate the software most frequently utilized by the computer-assisted intervention (CAI) research community. The assessments encompass a range of criteria, including load time, stress load, multi-tasking, extensibility and range of functionalities, user-friendliness, documentation, and technical support. Eight software programs were selected, including 3D Slicer, Elastix, ITK-SNAP, MedInria, MeVisLab, MIPAV, and Seg3D. While none of the software was found to be perfect on all evaluation criteria, 3D Slicer and ITK-SNAP emerged with the highest overall rankings. These two programs can frequently complement each other, as 3D Slicer has a broad and customizable range of features, while ITK-SNAP excels at performing fundamental tasks efficiently. Nonetheless, each software package has distinctive features that may better fit the requirements of certain research projects. This review provides valuable information to CAI researchers seeking the software best suited to their projects. The evaluation also offers insights for software development teams, as it highlights areas where the software can be improved.
Citations: 0