Journal of Imaging Informatics in Medicine: Latest Articles

Classification of Interventional Radiology Reports into Technique Categories with a Fine-Tuned Large Language Model.
Journal of imaging informatics in medicine Pub Date : 2024-12-13 DOI: 10.1007/s10278-024-01370-w
Koichiro Yasaka, Takuto Nomura, Jun Kamohara, Hiroshi Hirakawa, Takatoshi Kubo, Shigeru Kiryu, Osamu Abe
The aim of this study was to develop a fine-tuned large language model that classifies interventional radiology reports into technique categories and to compare its performance with that of human readers. This retrospective study included 3198 patients (1758 males and 1440 females; age, 62.8 ± 16.8 years) who underwent interventional radiology from January 2018 to July 2024. The training, validation, and test datasets comprised 2292, 250, and 656 patients, respectively. Input data consisted of the text in the clinical indication, imaging diagnosis, and imaging findings sections of the interventional radiology reports. Manually classified technique categories (15 in total) served as reference data. Fine-tuning of a Bidirectional Encoder Representations from Transformers model was performed using the training and validation datasets; because of the randomness of the learning process, this was repeated 15 times. The best-performing model, i.e., the one with the highest accuracy among the 15 trials, was further evaluated on the independent test dataset. Report classification was also performed by one radiologist (reader 1) and two radiology residents (readers 2 and 3). The accuracy and macro-sensitivity (average of each category's sensitivity) of the best-performing model on the validation dataset were 0.996 and 0.994, respectively. On the test dataset, the accuracy/macro-sensitivity were 0.988/0.980, 0.986/0.977, 0.989/0.979, and 0.988/0.980 for the best model, reader 1, reader 2, and reader 3, respectively. The model required 0.178 s per patient for classification, 17.5-19.9 times faster than the readers. In conclusion, the fine-tuned large language model classified interventional radiology reports into technique categories with accuracy comparable to that of the readers, in a remarkably shorter time.
Citations: 0
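The macro-sensitivity metric reported in this abstract is the unweighted mean of per-category sensitivities (recalls). A minimal sketch of that metric; the technique-category names below are hypothetical examples, not the paper's actual labels:

```python
from collections import defaultdict

def macro_sensitivity(y_true, y_pred):
    """Average of per-category sensitivities (recalls)."""
    tp = defaultdict(int)  # correct predictions per true category
    fn = defaultdict(int)  # misses per true category
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fn[t] += 1
    classes = set(y_true)
    return sum(tp[c] / (tp[c] + fn[c]) for c in classes) / len(classes)

# Toy example with three hypothetical technique categories
y_true = ["CVport", "biopsy", "biopsy", "drainage", "drainage", "drainage"]
y_pred = ["CVport", "biopsy", "drainage", "drainage", "drainage", "biopsy"]
print(macro_sensitivity(y_true, y_pred))  # mean of 1.0, 0.5, and 2/3
```

Unlike plain accuracy, this metric weights every category equally, so rare techniques count as much as common ones.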
Single-View Fluoroscopic X-Ray Pose Estimation: A Comparison of Alternative Loss Functions and Volumetric Scene Representations.
Journal of imaging informatics in medicine Pub Date : 2024-12-13 DOI: 10.1007/s10278-024-01354-w
Chaochao Zhou, Syed Hasib Akhter Faruqui, Dayeong An, Abhinav Patel, Ramez N Abdalla, Michael C Hurley, Ali Shaibani, Matthew B Potts, Babak S Jahromi, Sameer A Ansari, Donald R Cantrell
Many tasks performed in image-guided procedures can be cast as pose estimation problems, in which specific projections are chosen to reach a target in 3D space. In this study, we construct a framework for fluoroscopic pose estimation and compare alternative loss functions and volumetric scene representations. We first develop a differentiable projection (DiffProj) algorithm for the efficient computation of digitally reconstructed radiographs (DRRs) from either cone-beam computed tomography (CBCT) or neural scene representations. We introduce two novel neural scene representations: Neural Tuned Tomography (NeTT) and masked Neural Radiance Fields (mNeRF). Pose estimation is then performed within the framework by iterative gradient descent, using loss functions that quantify the discrepancy between the synthesized DRR and the ground-truth target fluoroscopic X-ray image. We compared the alternative loss functions and volumetric scene representations using a dataset of 50 cranial tomographic X-ray sequences. We find that mutual information significantly outperforms the alternative loss functions for pose estimation, avoiding entrapment in local optima. The discrete (CBCT) and neural (NeTT and mNeRF) volumetric scene representations yield comparable performance (3D angle errors: mean ≤ 3.2° and 90% quantile ≤ 3.4°); however, the neural scene representations incur considerable computational expense to train.
Citations: 0
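The study's preferred similarity measure, mutual information between a synthesized DRR and a target X-ray, can be estimated from a joint intensity histogram. A minimal NumPy sketch, not the authors' implementation:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based mutual information between two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image B
    nz = pxy > 0                              # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

# MI is maximal for an image with itself, low for unrelated noise
rng = np.random.default_rng(0)
fixed = rng.random((64, 64))
noise = rng.random((64, 64))
print(mutual_information(fixed, fixed), mutual_information(fixed, noise))
```

Because mutual information depends only on the joint intensity statistics, not on absolute intensity values, it tolerates the contrast differences between rendered DRRs and real fluoroscopy, which is one plausible reason it avoids the local optima that trap intensity-difference losses.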
Development of Periapical Index Score Classification System in Periapical Radiographs Using Deep Learning.
Journal of imaging informatics in medicine Pub Date : 2024-12-13 DOI: 10.1007/s10278-024-01360-y
Natdanai Hirata, Panupong Pudhieng, Sadanan Sena, Suebpong Torn-Asa, Wannakamon Panyarak, Kittipit Klanliang, Kittichai Wantanajittikul
The periapical index (PAI) scoring system is the most widely used index for evaluating apical periodontitis (AP) on radiographs. It provides an ordinal scale from 1 to 5, ranging from healthy to severe AP. Scoring the PAI is time-consuming and requires experienced dentists; thus, deep learning has been applied to hasten the process. However, most models fail to score the early stage of AP, score 2, accurately, since it shares very similar characteristics with its adjacent scores. In this study, we developed and compared two binary classification methods for PAI scores: a normality classification method and a health-disease classification method. The normality classification method labeled PAI score 1 as Normal and the remaining scores as Abnormal, while the health-disease method labeled PAI scores 1 and 2 as Healthy and the remaining scores as Diseased. A total of 2266 periapical root areas (PRAs) from 520 periapical radiographs (PAs) were selected and scored by experts. GoogLeNet, AlexNet, and ResNet convolutional neural networks (CNNs) were used. The trained models' performance was evaluated and compared: the normality classification models achieved a highest accuracy of 75.00%, while the health-disease models performed better, with a highest accuracy of 83.33%. In conclusion, CNN models classified better when PAI scores 1 and 2 were grouped into the same class, supporting the use of health-disease PAI scoring in clinical practice.
Citations: 0
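The two binary groupings compared in the study amount to a simple relabeling of the ordinal PAI scale, differing only in where score 2 lands. A sketch:

```python
def normality_label(pai):
    """Normality grouping: PAI 1 -> Normal; PAI 2-5 -> Abnormal."""
    assert pai in range(1, 6)
    return "Normal" if pai == 1 else "Abnormal"

def health_disease_label(pai):
    """Health-disease grouping: PAI 1-2 -> Healthy; PAI 3-5 -> Diseased."""
    assert pai in range(1, 6)
    return "Healthy" if pai <= 2 else "Diseased"

# Score 2 (early apical periodontitis) is the only score the two
# groupings disagree on; the study found that grouping it with
# score 1 (health-disease) yields better CNN accuracy.
print([(s, normality_label(s), health_disease_label(s)) for s in range(1, 6)])
```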
Semi-supervised Ensemble Learning for Automatic Interpretation of Lung Ultrasound Videos.
Journal of imaging informatics in medicine Pub Date : 2024-12-13 DOI: 10.1007/s10278-024-01344-y
Bárbara Malainho, João Freitas, Catarina Rodrigues, Ana Claudia Tonelli, André Santanchè, Marco A Carvalho-Filho, Jaime C Fonseca, Sandro Queirós
Point-of-care ultrasound (POCUS) is a safe, portable, and cost-effective imaging modality for swift bedside patient examinations. In particular, lung ultrasonography (LUS) has proven useful in evaluating both acute and chronic pulmonary conditions. Despite its clinical value, automatic LUS interpretation remains relatively unexplored, particularly in multi-label contexts. This work proposes a novel deep learning (DL) framework for interpreting lung POCUS videos, whose outputs are the finding(s) present in each video (such as A-lines, B-lines, or consolidations). The pipeline, based on a residual (2+1)D architecture, starts with a pre-processing routine for video masking and standardisation and employs a semi-supervised approach to harness available unlabeled data. Additionally, we introduce an ensemble modeling strategy that aggregates the outputs of models trained to predict distinct label sets, thereby leveraging the hierarchical nature of LUS findings. The proposed framework and its building blocks were evaluated through extensive experiments with both multi-class and multi-label models, highlighting its versatility. On a held-out test set, the categorical proposal, suited for expedited triage, achieved an average F1-score of 92.4%, while the multi-label proposal, helpful for patient management and referral, achieved an average F1-score of 70.5% across five relevant LUS findings. Overall, the semi-supervised methodology contributed significantly to the improved performance, while the proposed hierarchy-aware ensemble provided moderate additional gains.
Citations: 0
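The multi-label result here averages per-finding F1-scores. A minimal sketch of that metric, with each video represented as a set of findings; the finding names come from the abstract, but the data are toy values:

```python
def average_f1(y_true, y_pred):
    """Mean of per-finding F1-scores for multi-label predictions.

    y_true, y_pred: one set of findings per video.
    """
    findings = set().union(*y_true, *y_pred)
    f1s = []
    for f in findings:
        tp = sum(f in t and f in p for t, p in zip(y_true, y_pred))
        fp = sum(f not in t and f in p for t, p in zip(y_true, y_pred))
        fn = sum(f in t and f not in p for t, p in zip(y_true, y_pred))
        # F1 = 2TP / (2TP + FP + FN); a finding never seen or predicted is skipped above
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0)
    return sum(f1s) / len(f1s)

videos_true = [{"A-lines"}, {"B-lines", "consolidation"}, {"B-lines"}]
videos_pred = [{"A-lines"}, {"B-lines"}, {"B-lines", "consolidation"}]
print(average_f1(videos_true, videos_pred))  # (1.0 + 1.0 + 0.0) / 3
```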
Diagnosing Respiratory Variability: Convolutional Neural Networks for Chest X-ray Classification Across Diverse Pulmonary Conditions.
Journal of imaging informatics in medicine Pub Date : 2024-12-13 DOI: 10.1007/s10278-024-01355-9
Rajesh Kancherla, Anju Sharma, Prabha Garg
The global burden of lung disease is a pressing issue, particularly in developing nations with limited healthcare access. Accurate diagnosis of lung conditions is crucial for effective treatment, but diagnosing lung ailments from medical imaging such as chest radiographs and CT scans is challenging due to the complex anatomy of the lungs. Deep learning methods, particularly convolutional neural networks (CNNs), offer promising solutions for automated disease classification from imaging data, with the potential to significantly improve healthcare access in regions with limited medical resources. The study employed a range of CNN models, including a baseline model and transfer-learning models (VGG16, VGG19, InceptionV3, and ResNet50). The models were trained, with ten-fold cross-validation, on image datasets sourced from the NIH and COVID-19 repositories containing 8000 chest radiographs depicting four lung conditions (lung opacity, COVID-19, pneumonia, and pneumothorax) and 2000 healthy chest radiographs. The VGG19-based model outperformed the baseline model, diagnosing lung diseases with average accuracies of 0.995 and 0.996 on the validation and external test datasets, respectively. The proposed model also outperformed published lung-disease prediction models; these findings underscore the superior performance of the VGG19 architecture in classifying and detecting lung diseases from chest radiographs. This study highlights the potential of AI, and especially of CNNs such as VGG19, to improve diagnostic accuracy for lung disorders. The predictive model is available on GitHub at https://github.com/PGlab-NIPER/Lung_disease_classification.
Citations: 0
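The ten-fold cross-validation protocol mentioned in this abstract partitions the data into ten disjoint folds, each serving once as the validation set. A minimal index-level sketch (frameworks such as scikit-learn provide equivalents):

```python
def kfold_indices(n_samples, k=10):
    """Partition sample indices into k folds; yield (train, val) index lists."""
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    for i, val in enumerate(folds):
        # training set = every index not in the held-out fold
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, val

# 20 samples, 10 folds: each split trains on 18 and validates on 2
splits = list(kfold_indices(20, k=10))
print(len(splits), len(splits[0][0]), len(splits[0][1]))  # 10 18 2
```

Each sample appears in exactly one validation fold, so every image contributes to both training and evaluation across the ten runs.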
CACs Recognition of FISH Images Based on Adaptive Mean Teacher Semi-supervised Learning with Domain-Knowledge Pseudo Label.
Journal of imaging informatics in medicine Pub Date : 2024-12-12 DOI: 10.1007/s10278-024-01348-8
Yuqing Weng, Qiuping Hu, Huajia Wang, Yinglan Kuang, Yanling Zhou, Yuyan Tang, Lei Wang, Xin Ye, Xing Lu
Circulating genetically abnormal cells (CACs) serve as crucial biomarkers for lung cancer diagnosis, and detecting them holds great value for early diagnosis and screening. To aid the identification of CACs, we have incorporated deep learning algorithms into our CACs detection system, specifically for cell segmentation and signal-point detection. However, deep learning algorithms require extensive data labeling; consequently, this study introduces a semi-supervised learning algorithm for CACs detection. For cell segmentation, a combination of self-training and the Mean Teacher method was adopted, and an Adaptive Mean Teacher approach was developed on top of the Mean Teacher to enhance the effectiveness of semi-supervised segmentation. For signal-point detection, an end-to-end semi-supervised algorithm was developed using the Adaptive Mean Teacher as the paradigm, and a Domain-Knowledge Pseudo Label was devised to improve the quality of pseudo-labeling and further enhance detection. By incorporating semi-supervised training in both sub-tasks, the reliance on labeled data is reduced, thereby improving the performance of CACs detection. The proposed semi-supervised method achieved good results in the cell segmentation task, the signal-point detection task, and the final CACs detection task; in the final CACs detection task, with 2%, 5%, and 10% of labeled data, it achieved 27.225%, 23.818%, and 4.513%, respectively. The experimental results demonstrate that the proposed method is effective.
Citations: 0
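At the core of the Mean Teacher paradigm used here, the teacher's weights are an exponential moving average (EMA) of the student's: the teacher is never trained by gradient descent, it only tracks a smoothed copy of the student and produces the pseudo-targets. A minimal sketch, with plain Python lists standing in for network parameters:

```python
def ema_update(teacher_w, student_w, alpha=0.99):
    """Mean Teacher step: teacher <- alpha * teacher + (1 - alpha) * student."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher_w, student_w)]

teacher = [0.0, 0.0]
student = [1.0, -1.0]
for _ in range(3):  # three updates against fixed student weights, for illustration
    teacher = ema_update(teacher, student, alpha=0.5)
print(teacher)  # [0.875, -0.875]: the teacher drifts smoothly toward the student
```

The smoothing makes the teacher's pseudo-labels more stable than the student's own noisy predictions, which is what makes them usable as training targets on unlabeled data.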
Towards Automated Semantic Segmentation in Mammography Images for Enhanced Clinical Applications.
Journal of imaging informatics in medicine Pub Date : 2024-12-11 DOI: 10.1007/s10278-024-01364-8
Cesar A Sierra-Franco, Jan Hurtado, Victor de A Thomaz, Leonardo C da Cruz, Santiago V Silva, Greis Francy M Silva-Calpa, Alberto Raposo
Mammography images are widely used to detect non-palpable breast lesions or nodules, aiding cancer prevention and enabling timely intervention when necessary. To support medical analysis, computer-aided detection systems can automate the segmentation of landmark structures, which helps in locating abnormalities and evaluating image-acquisition adequacy. This paper presents a deep learning-based framework for segmenting the nipple, the pectoral muscle, the fibroglandular tissue, and the fatty tissue in standard-view mammography images. To the best of our knowledge, we introduce the largest dataset dedicated to mammography segmentation of key anatomical structures, specifically designed to train deep learning models for this task. Through comprehensive experiments, we evaluated various deep learning architectures and training configurations, demonstrating robust segmentation performance across diverse and challenging cases; these results underscore the framework's potential for clinical integration. Four semantic segmentation architectures were compared, all proving suitable for the target problem and thereby offering flexibility in model selection. Beyond segmentation, we introduce a suite of applications derived from this framework to assist clinical assessment, including multi-view lesion registration, anatomical position estimation, image-acquisition quality evaluation, breast density measurement, and enhanced visualization of breast tissues, addressing critical needs in breast cancer screening and diagnosis.
Citations: 0
A Neural Network for Segmenting Tumours in Ultrasound Rectal Images.
Journal of imaging informatics in medicine Pub Date : 2024-12-11 DOI: 10.1007/s10278-024-01358-6
Yuanxi Zhang, Xiwen Deng, Tingting Li, Yuan Li, Xiaohui Wang, Man Lu, Lifeng Yang
Ultrasound imaging is the most cost-effective approach for the early detection of rectal cancer, a high-risk cancer. Our goal was to design an effective method that accurately identifies and segments rectal tumours in ultrasound images, thereby facilitating rectal cancer diagnosis: physicians could then devote more time to determining whether a tumour is benign or malignant and whether it has metastasized, rather than merely confirming its presence. Data originated from the Sichuan Province Cancer Hospital. The test, training, and validation sets comprised 53 patients with 173 images, 195 patients with 1247 images, and 20 patients with 87 images, respectively. We created a deep learning network architecture consisting of encoders and decoders. To enhance global information capture, we substituted the traditional convolutional decoders with global attention decoders and incorporated effective channel-information fusion for multiscale information integration. The Dice coefficient (DSC) of the proposed model was 75.49%, 4.03 percentage points higher than that of the benchmark model, and the 95th-percentile Hausdorff distance (HD95) was 24.75, which was 8.43 lower than that of the benchmark model. A paired t-test confirmed the statistical significance of the difference between our model and the benchmark model (p < 0.05). The proposed method effectively identifies and segments rectal tumours of diverse shapes and distinguishes normal rectal images from those containing tumours. Therefore, after consultation with physicians, we believe our method can effectively assist physicians in diagnosing rectal tumours via ultrasound.
Citations: 0
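The Dice coefficient used to evaluate this model measures overlap between a predicted and a ground-truth segmentation mask. A minimal NumPy sketch, not the authors' code:

```python
import numpy as np

def dice_coefficient(pred, gt):
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum())

# Toy 2x3 masks: 2 overlapping pixels out of 3 + 3 predicted/true pixels
pred = np.array([[1, 1, 0],
                 [0, 1, 0]])
gt = np.array([[1, 1, 0],
               [0, 0, 1]])
print(dice_coefficient(pred, gt))  # 2*2 / (3 + 3) = 2/3
```

DSC ranges from 0 (no overlap) to 1 (perfect match); HD95, the study's other metric, instead measures the 95th percentile of boundary-to-boundary distances and so penalizes outlying contour errors that overlap measures miss.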
Combination of Deep and Statistical Features of the Tissue of Pathology Images to Classify and Diagnose the Degree of Malignancy of Prostate Cancer.
Journal of imaging informatics in medicine Pub Date : 2024-12-11 DOI: 10.1007/s10278-024-01363-9
Yan Gao, Mahsa Vali
Prostate cancer is one of the most prevalent male-specific diseases, and early, accurate diagnosis is essential for effective treatment and for preventing disease progression. Assessing disease severity involves analyzing histological tissue samples, which are graded from 1 (healthy) to 5 (severely malignant) based on pathological features; traditional manual grading, however, is labor-intensive and prone to variability. This study addresses the challenge of automating prostate cancer classification by proposing a novel histological grade analysis approach. The method integrates the gray-level co-occurrence matrix (GLCM) for extracting texture features with a Haar wavelet modification to enhance feature quality; a convolutional neural network (CNN) is then employed for robust classification. Evaluated with statistical and performance metrics, the proposed method achieved an average accuracy of 97.3%, a precision of 98%, and an AUC of 0.95. These results underscore the effectiveness of the approach in accurately categorizing prostate tissue grades and demonstrate the potential of automated classification methods to support pathologists, enhance diagnostic precision, and improve clinical outcomes in prostate cancer care.
Citations: 0
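A gray-level co-occurrence matrix like the one used in this method counts how often pairs of gray levels co-occur at a fixed pixel offset; texture statistics such as contrast are then derived from it. A minimal sketch for one offset (libraries such as scikit-image provide optimized equivalents):

```python
import numpy as np

def glcm(img, levels, offset=(0, 1)):
    """Normalized co-occurrence matrix for one pixel offset (here: right neighbour)."""
    dr, dc = offset
    m = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[img[r, c], img[r + dr, c + dc]] += 1  # count the gray-level pair
    return m / m.sum()

def contrast(p):
    """GLCM contrast: sum over (i - j)^2 * p[i, j]."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# Toy 3x3 image quantized to 4 gray levels
img = np.array([[0, 0, 1],
                [0, 0, 1],
                [2, 2, 3]])
p = glcm(img, levels=4)
print(contrast(p))  # 0.5: half the horizontal pairs differ by one level
```

In the paper's pipeline, statistics like this one (computed after the Haar wavelet modification) form the texture feature vector handed to the CNN.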
NETosis Genes and Pathomic Signature: A Novel Prognostic Marker for Ovarian Serous Cystadenocarcinoma.
Journal of imaging informatics in medicine Pub Date : 2024-12-11 DOI: 10.1007/s10278-024-01366-6
Feng Zhan, Yina Guo, Lidan He
To evaluate the prognostic significance and molecular mechanism of NETosis markers in ovarian serous cystadenocarcinoma (OSC), we constructed a machine learning-based pathomic model using hematoxylin and eosin (H&E) slides. We analyzed 333 patients with OSC from The Cancer Genome Atlas for prognosis-related neutrophil extracellular trap formation (NETosis) genes through bioinformatics analysis. Pathomic features were extracted from 54 cases with complete pathological images, genetic matrices, and clinical information. Two pathomic prognostic models were constructed using support vector machine (SVM) and logistic regression (LR) algorithms, and a predictive scoring system was established that integrates pathomic scores based on the NETcluster subtypes and the clinical signature. We identified four NETosis genes significantly correlated with OSC prognosis, functionally associated with immune response, somatic mutations, tumor invasion, and metastasis. Five robust pathomic features were selected for overall survival prediction. The LR and SVM pathomic models demonstrated strong predictive performance for NETcluster subtype classification under five-fold cross-validation. Time-dependent ROC analysis revealed the excellent prognostic capability of the LR pathomic model's score for overall survival (AUC values of 0.658, 0.761, and 0.735 at 36, 48, and 60 months, respectively), further validated by Kaplan-Meier analysis. The expression levels of NETosis genes strongly affected OSC patients' prognoses. Pathomic analysis of H&E slide images thus provides an effective approach for predicting both NETcluster subtype and overall survival in OSC patients.
Citations: 0
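The AUC values reported here can be read as concordance: the probability that the model scores a randomly chosen positive case above a randomly chosen negative one. A minimal rank-based sketch (the Mann-Whitney formulation; the scores below are toy values, not study data):

```python
def auc(scores, labels):
    """AUC as the fraction of positive/negative pairs ranked correctly (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.3, 0.6, 0.2]   # hypothetical pathomic risk scores
labels = [1, 1, 0, 0]           # 1 = event (e.g., death within the time horizon)
print(auc(scores, labels))  # 3 of 4 positive/negative pairs concordant -> 0.75
```

Time-dependent ROC analysis, as used in the paper, recomputes labels at each horizon (36, 48, 60 months) before evaluating this concordance, which is why the AUC varies over time.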