{"title":"Implementing a Photodocumentation Program.","authors":"Eric K Lai, Evan Slavik, Bessie Ganim, Laurie A Perry, Caitlin Treuting, Troy Dee, Melissa Osborne, Cieara Presley, Alexander J Towbin","doi":"10.1007/s10278-024-01236-1","DOIUrl":"10.1007/s10278-024-01236-1","url":null,"abstract":"<p><p>The widespread availability of smart devices has facilitated the use of medical photography, yet photodocumentation workflows are seldom implemented in healthcare organizations due to integration challenges with electronic health records (EHR) and standard clinical workflows. This manuscript details the implementation of a comprehensive photodocumentation workflow across all phases of care at a large healthcare organization, emphasizing efficiency and patient safety. From November 2018 to December 2023, healthcare workers at our institution uploaded nearly 32,000 photodocuments spanning 54 medical specialties. The photodocumentation process requires as few as 11 mouse clicks and keystrokes within the EHR and on smart devices. Automation played a crucial role in driving workflow efficiency and patient safety. For example, body part rules were used to automate the application of a sensitive label to photos of the face, chest, external genitalia, and buttocks. This automation was successful, with over 50% of the uploaded photodocuments being labeled as sensitive. Our implementation highlights the potential for standardizing photodocumentation workflows, thereby enhancing clinical documentation, improving patient care, and ensuring the secure handling of sensitive images.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"671-680"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950542/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142038686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimized Spatial Transformer for Segmenting Pancreas Abnormalities.","authors":"Banavathu Sridevi, B John Jaidhan","doi":"10.1007/s10278-024-01224-5","DOIUrl":"10.1007/s10278-024-01224-5","url":null,"abstract":"<p><p>The precise delineation of the pancreas from clinical images poses a substantial obstacle in the realm of medical image analysis and surgical procedures. Challenges arise from the complexities of clinical image analysis and complications in clinical practice related to the pancreas. To tackle these challenges, a novel approach called the Spatial Horned Lizard Attention Approach (SHLAM) has been developed. First, a preprocessing function examines and eliminates noise from the training MRI data. Next, an assessment of the current attributes is conducted, followed by the identification of essential elements for forecasting the affected region. Once the affected region has been identified, the images undergo segmentation. The present study assigns 80% of the data for training and 20% for testing. The optimal parameters were assessed based on precision, accuracy, recall, F-measure, error rate, Dice, and Jaccard. The performance improvement has been demonstrated by validating the method against various existing models. The proposed SHLAM method demonstrated an accuracy of 99.6%, surpassing all alternative methods.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"931-945"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950475/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142127895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Lesion Classification by Model-Based Feature Extraction: A Differential Affine Invariant Model of Soft Tissue Elasticity in CT Images.","authors":"Weiguo Cao, Marc J Pomeroy, Zhengrong Liang, Yongfeng Gao, Yongyi Shi, Jiaxing Tan, Fangfang Han, Jing Wang, Jianhua Ma, Hongbin Lu, Almas F Abbasi, Perry J Pickhardt","doi":"10.1007/s10278-024-01178-8","DOIUrl":"10.1007/s10278-024-01178-8","url":null,"abstract":"<p><p>The elasticity of soft tissues has been widely considered a characteristic property for differentiating healthy tissue from lesions and has therefore motivated the development of several elasticity imaging modalities, such as ultrasound elastography, magnetic resonance elastography, and optical coherence elastography, to measure tissue elasticity directly. This paper proposes an alternative approach that models elasticity for prior knowledge-based extraction of tissue elastic characteristic features for machine learning (ML) lesion classification using the computed tomography (CT) imaging modality. The model describes a dynamic non-rigid (or elastic) soft tissue deformation on a differential manifold to mimic the tissue's elasticity under wave fluctuation in vivo. Based on the model, a local deformation invariant is formulated using the 1<sup>st</sup> and 2<sup>nd</sup> order derivatives of the lesion volumetric CT image and used to generate an elastic feature map of the lesion volume. From the feature map, tissue elastic features are extracted and fed to ML to perform lesion classification. Two pathologically proven image datasets of colon polyps and lung nodules were used to test the modeling strategy. The outcomes reached an area under the receiver operating characteristic curve of 94.2% for the polyps and 87.4% for the nodules, an average gain of 5 to 20% over several existing state-of-the-art image feature-based lesion classification methods. The gain demonstrates the importance of extracting tissue characteristic features for lesion classification rather than image features, which can include various image artifacts and may vary across image acquisition protocols and imaging modalities.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"804-818"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950485/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142010199","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
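The feature-map construction described in the abstract above (combining 1st- and 2nd-order derivatives of a lesion volume) can be illustrated with a toy sketch. This is not the paper's differential affine invariant; it is a minimal stand-in that sums gradient magnitude (1st order) and absolute Laplacian (2nd order), and the array shapes and function name are hypothetical.

```python
import numpy as np

def derivative_feature_map(volume):
    """Toy per-voxel feature map from 1st- and 2nd-order derivatives.

    Stand-in (not the paper's invariant): gradient magnitude plus
    absolute Laplacian, both estimated by finite differences.
    """
    gx, gy, gz = np.gradient(volume.astype(float))
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)        # 1st-order term
    # Summing the diagonal 2nd derivatives approximates the Laplacian.
    laplacian = (np.gradient(gx, axis=0)
                 + np.gradient(gy, axis=1)
                 + np.gradient(gz, axis=2))
    return grad_mag + np.abs(laplacian)

vol = np.random.default_rng(0).random((8, 8, 8))     # stand-in CT sub-volume
fmap = derivative_feature_map(vol)                   # same shape as input
```

In the paper's pipeline, statistics extracted from such a map (rather than raw image intensities) would then be fed to an ML classifier.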
{"title":"EAAC-Net: An Efficient Adaptive Attention and Convolution Fusion Network for Skin Lesion Segmentation.","authors":"Chao Fan, Zhentong Zhu, Bincheng Peng, Zhihui Xuan, Xinru Zhu","doi":"10.1007/s10278-024-01223-6","DOIUrl":"10.1007/s10278-024-01223-6","url":null,"abstract":"<p><p>Accurate segmentation of skin lesions in dermoscopic images is of key importance for quantitative analysis of melanoma. Although existing medical image segmentation methods significantly improve skin lesion segmentation, they still have limitations in extracting local features with global information, do not handle challenging lesions well, and usually have a large number of parameters and high computational complexity. To address these issues, this paper proposes an efficient adaptive attention and convolutional fusion network for skin lesion segmentation (EAAC-Net). We designed two parallel encoders, where the efficient adaptive attention feature extraction module (EAAM) adaptively establishes global spatial dependence and global channel dependence by constructing the adjacency matrix of the directed graph and can adaptively filter out the least relevant tokens at the coarse-grained region level, thus reducing the computational complexity of the self-attention mechanism. The efficient multiscale attention-based convolution module (EMA⋅C) utilizes multiscale attention for cross-space learning of local features extracted from the convolutional layer to enhance the representation of richly detailed local features. In addition, we designed a reverse attention feature fusion module (RAFM) to enhance the effective boundary information gradually. To validate the performance of our proposed network, we compared it with other methods on ISIC 2016, ISIC 2018, and PH<sup>2</sup> public datasets, and the experimental results show that EAAC-Net has superior segmentation performance under commonly used evaluation metrics.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1120-1136"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950606/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141989826","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Correction: Certified Imaging Informatics Professionals (CIIP) Demonstrate Value to the Healthcare Industry and Focus on Quality Through the ABII 10-Year Requirements Practice Option.","authors":"Ameena Elahi, Nikki Fennell, Liana Watson","doi":"10.1007/s10278-024-01246-z","DOIUrl":"10.1007/s10278-024-01246-z","url":null,"abstract":"","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1280-1281"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950538/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142134997","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Optimizing Acute Stroke Segmentation on MRI Using Deep Learning: Self-Configuring Neural Networks Provide High Performance Using Only DWI Sequences.","authors":"Peter Kamel, Adway Kanhere, Pranav Kulkarni, Mazhar Khalid, Rachel Steger, Uttam Bodanapally, Dheeraj Gandhi, Vishwa Parekh, Paul H Yi","doi":"10.1007/s10278-024-00994-2","DOIUrl":"10.1007/s10278-024-00994-2","url":null,"abstract":"<p><p>Segmentation of infarcts is clinically important in ischemic stroke management and prognostication. It is unclear what role the combination of DWI, ADC, and FLAIR MRI sequences provide for deep learning in infarct segmentation. Recent technologies in model self-configuration have promised greater performance and generalizability through automated optimization. We assessed the utility of DWI, ADC, and FLAIR sequences on ischemic stroke segmentation, compared self-configuring nnU-Net models to conventional U-Net models without manual optimization, and evaluated the generalizability of results on an external clinical dataset. 3D self-configuring nnU-Net models and standard 3D U-Net models with MONAI were trained on 200 infarcts using DWI, ADC, and FLAIR sequences separately and in all combinations. Segmentation results were compared between models using paired t-test comparison on a hold-out test set of 50 cases. The highest performing model was externally validated on a clinical dataset of 50 MRIs. nnU-Net with DWI sequences attained a Dice score of 0.810 ± 0.155. There was no statistically significant difference when DWI sequences were supplemented with ADC and FLAIR images (Dice score of 0.813 ± 0.150; p = 0.15). nnU-Net models significantly outperformed standard U-Net models for all sequence combinations (p < 0.001). On the external dataset, Dice scores measured 0.704 ± 0.199 for positive cases, with false positives occurring in cases of intracranial hemorrhage. Highly optimized neural networks such as nnU-Net provide excellent stroke segmentation even when only provided DWI images, without significant improvement from other sequences. This approach differs from, and significantly outperforms, standard U-Net architectures. Results translated well to the external clinical environment and provide the groundwork for optimized acute stroke segmentation on MRI.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"717-726"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950494/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141977552","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature-Based vs. Deep-Learning Fusion Methods for the In Vivo Detection of Radiation Dermatitis Using Optical Coherence Tomography, a Feasibility Study.","authors":"Christos Photiou, Constantina Cloconi, Iosif Strouthos","doi":"10.1007/s10278-024-01241-4","DOIUrl":"10.1007/s10278-024-01241-4","url":null,"abstract":"<p><p>Acute radiation dermatitis (ARD) is a common and distressing issue for cancer patients undergoing radiation therapy, leading to significant morbidity. Despite available treatments, ARD remains difficult to manage, necessitating further research to improve prevention and management strategies. Moreover, the lack of biomarkers for early quantitative assessment of ARD impedes progress in this area. This study aims to investigate the detection of ARD using intensity-based and novel features of Optical Coherence Tomography (OCT) images, combined with machine learning. Imaging sessions were conducted twice weekly on twenty-two patients at six neck locations throughout their radiation treatment, with ARD severity graded by an expert oncologist. We compared a traditional feature-based machine learning technique with a deep learning late-fusion approach to classify normal skin vs. ARD using a dataset of 1487 images. The dataset analysis demonstrates that the deep learning approach outperformed traditional machine learning, achieving an accuracy of 88%. These findings offer a promising foundation for future research aimed at developing a quantitative assessment tool to enhance the management of ARD.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"1137-1146"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950469/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142135010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ensemble of Deep Learning Architectures with Machine Learning for Pneumonia Classification Using Chest X-rays.","authors":"Rupali Vyas, Deepak Rao Khadatkar","doi":"10.1007/s10278-024-01201-y","DOIUrl":"10.1007/s10278-024-01201-y","url":null,"abstract":"<p><p>Pneumonia is a severe health concern, particularly for vulnerable groups, needing early and correct classification for optimal treatment. This study addresses the use of deep learning combined with machine learning classifiers (DLxMLCs) for pneumonia classification from chest X-ray (CXR) images. We deployed modified VGG19, ResNet50V2, and DenseNet121 models for feature extraction, followed by five machine learning classifiers (logistic regression, support vector machine, decision tree, random forest, artificial neural network). The approach we suggested displayed remarkable accuracy, with VGG19 and DenseNet121 models obtaining 99.98% accuracy when combined with random forest or decision tree classifiers. ResNet50V2 achieved 99.25% accuracy with random forest. These results illustrate the advantages of merging deep learning models with machine learning classifiers in boosting the speedy and accurate identification of pneumonia. The study underlines the potential of DLxMLC systems in enhancing diagnostic accuracy and efficiency. By integrating these models into clinical practice, healthcare practitioners could greatly boost patient care and results. Future research should focus on refining these models and exploring their application to other medical imaging tasks, as well as including explainability methodologies to better understand their decision-making processes and build trust in their clinical use. This technique promises breakthroughs in medical imaging and patient management.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"727-746"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950602/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141977551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
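The DLxMLC pattern in the pneumonia-classification abstract above uses a frozen deep network purely as a feature extractor, with a classical classifier trained on the resulting feature vectors. A minimal self-contained sketch of that pattern: a random projection stands in for the CNN backbone (loading VGG19/DenseNet121 would require weights and a GPU), and all data, shapes, and names are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
images = rng.random((40, 64, 64))        # stand-in "chest X-rays"
labels = rng.integers(0, 2, size=40)     # 0 = normal, 1 = pneumonia (toy)

# Stand-in "backbone": a fixed random projection to 128-D feature vectors.
# In the study this role is played by a pretrained CNN's penultimate layer.
W = rng.standard_normal((64 * 64, 128))
features = images.reshape(40, -1) @ W    # shape (40, 128)

# Classical classifier trained on the extracted features.
clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(features, labels)
preds = clf.predict(features)
```

The design point is the decoupling: the feature extractor is never fine-tuned, so any of the five classical classifiers can be swapped in cheaply.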
{"title":"Machine learning-based model assists in differentiating Mycobacterium avium Complex Pulmonary Disease from Pulmonary Tuberculosis: A Multicenter Study.","authors":"Jiacheng Zhang, Tingting Huang, Xu He, Dingsheng Han, Qian Xu, Fukun Shi, Lan Zhang, Dailun Hou","doi":"10.1007/s10278-025-01486-7","DOIUrl":"https://doi.org/10.1007/s10278-025-01486-7","url":null,"abstract":"<p><p>The number of Mycobacterium avium-intracellulare complex pulmonary disease patients is increasing globally. Distinguishing Mycobacterium avium-intracellulare complex pulmonary disease from pulmonary tuberculosis is difficult due to similar manifestations and characteristics. We aimed to build and validate a machine learning model using clinical data and computed tomography features to differentiate them. This multi-centered, retrospective study included 169 patients diagnosed with Mycobacterium avium-intracellulare complex pulmonary disease or pulmonary tuberculosis from date to date. Data were analyzed, and logistic regression, random forest, and support vector machine models were established and validated. Performance was evaluated using receiver operating characteristic and precision-recall curves. In total, 84 patients with Mycobacterium avium-intracellulare complex pulmonary disease and 85 with pulmonary tuberculosis were analyzed. Patients with Mycobacterium avium-intracellulare complex pulmonary disease were older. Hemoptysis rate, cavity number and morphology, bronchiectasis type, and distribution differed. The support vector machine model performed better. In the training set, the area under the curve was 0.960, and in the validation set it was 0.885. The precision-recall curve showed high accuracy and low recall for the support vector machine model. The support vector machine learning-based model, which integrates clinical data and computed tomography imaging features, exhibited excellent diagnostic performance and can assist in differentiating Mycobacterium avium-intracellulare complex pulmonary disease from pulmonary tuberculosis.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143766303","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
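The area-under-the-ROC-curve values reported in the abstract above (0.960 training, 0.885 validation) can be computed directly from classifier scores and labels via the rank-sum (Mann-Whitney) identity: AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch with hypothetical toy data:

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney identity: fraction of (positive, negative)
    pairs where the positive scores higher; ties count half."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = ((pos[:, None] > neg[None, :]).sum()
            + 0.5 * (pos[:, None] == neg[None, :]).sum())
    return wins / (len(pos) * len(neg))

# Perfectly separated toy scores give AUC = 1.0.
print(roc_auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
```

This pure-NumPy form matches what library routines such as scikit-learn's `roc_auc_score` compute for binary labels.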
{"title":"Feasibility of Three-Dimension Chemical Exchange Saturation Transfer MRI for Predicting Tumor and Node Staging in Rectal Adenocarcinoma: An Exploration of Optimal ROI Measurement.","authors":"Xiao Wang, Wenguang Liu, Ismail Bilal Masokano, Weiyin Vivian Liu, Yigang Pei, Wenzheng Li","doi":"10.1007/s10278-024-01029-6","DOIUrl":"10.1007/s10278-024-01029-6","url":null,"abstract":"<p><p>To investigate the feasibility of predicting rectal adenocarcinoma (RA) tumor (T) and node (N) staging from an optimal ROI measurement using amide proton transfer weighted-signal intensity (APTw-SI) and magnetization transfer (MT) derived from three-dimensional chemical exchange saturation transfer (3D-CEST). Fifty-eight RA patients with pathological TN staging underwent 3D-CEST and DWI. APTw-SI, MT, and ADC values were measured using three ROI approaches (ss-ROI, ts-ROI, and wt-ROI) to analyze the TN staging (T staging, T1-2 vs T3-4; N staging, N - vs N +); the reproducibility of APTw-SI and MT was also evaluated. The AUC was used to assess the staging performance and determine the optimal ROI strategy. MT and APTw-SI yielded good to excellent reproducibility with the three ROIs. Significant differences in MT across TN stages were observed with all ROIs (all P < 0.05), but not in APTw-SI or ADC (all P > 0.05). AUCs of MT from ss-ROI were 0.860 (95% CI, 0.743-0.937) and 0.852 (95% CI, 0.735-0.932) for predicting T and N staging, which is similar to ts-ROI (T staging, 0.856 [95% CI, 0.739-0.934]; N staging, 0.831 [95% CI, 0.710-0.917]) and wt-ROI (T staging, 0.833 [95% CI, 0.712-0.918]; N staging, 0.848 [95% CI, 0.729-0.929]) (all P > 0.05). The MT value from 3D-CEST showed excellent TN staging predictive performance in RA patients with all three ROI methods. The ss-ROI is easy to operate and could serve as the preferred ROI approach for clinical and research applications of 3D-CEST imaging.</p>","PeriodicalId":516858,"journal":{"name":"Journal of imaging informatics in medicine","volume":" ","pages":"946-956"},"PeriodicalIF":0.0,"publicationDate":"2025-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11950466/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142142304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}