Visual Computing for Industry Biomedicine and Art: Latest Articles

Development and validation of a machine learning model for predicting venous thromboembolism complications following colorectal cancer surgery.
IF 6.0 | CAS Zone 4 | Computer Science
Visual Computing for Industry Biomedicine and Art Pub Date : 2025-09-12 DOI: 10.1186/s42492-025-00204-y
Zongsheng Sun, Di Hao, Mingyu Yang, Wenzhi Wu, Hanhui Jing, Zhensong Yang, Anbang Sun, Wentao Xie, Longbo Zheng, Xixun Wang, Dongsheng Wang, Yun Lu, Guangye Tian, Shanglong Liu
{"title":"Development and validation of a machine learning model for predicting venous thromboembolism complications following colorectal cancer surgery.","authors":"Zongsheng Sun, Di Hao, Mingyu Yang, Wenzhi Wu, Hanhui Jing, Zhensong Yang, Anbang Sun, Wentao Xie, Longbo Zheng, Xixun Wang, Dongsheng Wang, Yun Lu, Guangye Tian, Shanglong Liu","doi":"10.1186/s42492-025-00204-y","DOIUrl":"10.1186/s42492-025-00204-y","url":null,"abstract":"<p><p>Postoperative venous thromboembolism (VTE) in colorectal cancer (CRC) patients undergoing surgery results in poor prognosis. However, there are no effective tools for early screening and predicting VTE. In this study, we developed a machine learning (ML)-based model for predicting the risk of VTE following CRC surgery and tested its performance using an external dataset. A total of 3227 CRC surgery patients were enrolled from the Affiliated Hospital of Qingdao University and Yantai Yuhuangding Hospital (from January 2016 to December 2023). Subsequently, 1596 patients from the Affiliated Hospital of Qingdao University were assigned to the training set, and 716 patients from Yantai Yuhuangding Hospital were assigned to the external validation set. A model was developed and trained using six ML algorithms using the stacking ensemble technique. Moreover, all models were developed using the tenfold cross-validation on the training set, and their performance was tested using an independent external validation set. In the training set, 173 (10.8%) patients developed VTE, 163 (10.2%) patients experienced deep venous thrombosis, and 29 (1.82%) cases had pulmonary embolism (PE). In the external validation set, 85 (11.9%) cases of VTE, 83 (11.6%) cases of deep vein thrombosis, and 14 (1.96%) cases of PE were recorded. The analysis revealed that the stacking model outperformed all other models in the external validation set, achieving significantly better performance in all metrics: the area under the receiver operating characteristic curve = 0.840 (0.790-0.887), accuracy = 0.810 (0.783-0.836), specificity = 0.819 (0.790-0.848), sensitivity = 0.741 (0.652-0.825), and recall = 0.959 (0.942-0.975). The stacking model for surgical CRC patients shows promise in enabling timely clinical detection of high-risk cases. This method facilitates the prioritized implementation of prophylactic anticoagulation in confirmed high-risk individuals, thereby mitigating unnecessary pharmacological intervention in low-risk populations.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"22"},"PeriodicalIF":6.0,"publicationDate":"2025-09-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12425853/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145041587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
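The abstract above describes a stacking ensemble of six ML algorithms trained with tenfold cross-validation. As a rough illustration of that general workflow, not the authors' code, here is a minimal scikit-learn sketch; the choice of base learners, the synthetic feature matrix, and the ~11% event rate are placeholders.

```python
# Minimal sketch of a stacking ensemble with tenfold cross-validation for a binary outcome.
# Base learners, features, and labels are illustrative, not those of the paper.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def build_stacking_model():
    # Six base learners combined by a logistic-regression meta-learner.
    base_learners = [
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
        ("svm", SVC(probability=True, random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("dt", DecisionTreeClassifier(random_state=0)),
    ]
    return StackingClassifier(
        estimators=base_learners,
        final_estimator=LogisticRegression(max_iter=1000),
        cv=10,                        # tenfold CV for the out-of-fold meta-features
        stack_method="predict_proba",
    )

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 12))    # placeholder clinical feature matrix
    y = np.zeros(200, dtype=int)
    y[:22] = 1                        # roughly 11% positives, similar to the reported VTE rate
    rng.shuffle(y)
    model = build_stacking_model()
    aucs = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
    print(f"10-fold ROC-AUC: {aucs.mean():.3f} +/- {aucs.std():.3f}")
```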
Lightweight and mobile artificial intelligence and immersive technologies in aviation.
IF 6.0 | CAS Zone 4 | Computer Science
Visual Computing for Industry Biomedicine and Art Pub Date : 2025-09-03 DOI: 10.1186/s42492-025-00203-z
Graham Wild, Aziida Nanyonga, Anam Iqbal, Shehar Bano, Alexander Somerville, Luke Pollock
{"title":"Lightweight and mobile artificial intelligence and immersive technologies in aviation.","authors":"Graham Wild, Aziida Nanyonga, Anam Iqbal, Shehar Bano, Alexander Somerville, Luke Pollock","doi":"10.1186/s42492-025-00203-z","DOIUrl":"10.1186/s42492-025-00203-z","url":null,"abstract":"<p><p>This review examines the current applications, benefits, challenges, and future potential of artificial intelligence (AI) and immersive aviation technologies. AI has been applied across various domains, including flight operations, air traffic control, maintenance, and ground handling. AI enhances aviation safety by enabling pilot assistance systems, mitigating human error, streamlining safety management systems, and aiding in accident analysis. Lightweight AI models are crucial for mobile applications in aviation, particularly for resource-constrained environments such as drones. Hardware considerations involve trade-offs between energy-efficient field-programmable gate arrays and power-consuming graphics processing units. Battery and thermal management are critical for mobile device applications. Although AI integration has numerous benefits, including enhanced safety, improved efficiency, and reduced environmental impact, it also presents challenges. Addressing algorithmic bias, ensuring cybersecurity, and managing the relationship between human operators and AI systems are crucial. The future of aviation will likely involve even more sophisticated AI algorithms, advanced hardware, and increased integration of AI with augmented reality and virtual reality, creating new possibilities for training and operations, and ultimately leading to a safer, more efficient, and more sustainable aviation industry.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"21"},"PeriodicalIF":6.0,"publicationDate":"2025-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12408884/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multimodal dynamic hierarchical clustering model for post-stroke cognitive impairment prediction.
IF 6.0 | CAS Zone 4 | Computer Science
Visual Computing for Industry Biomedicine and Art Pub Date : 2025-09-01 DOI: 10.1186/s42492-025-00202-0
Chen Bai, Tan Li, Yanyan Zheng, Gang Yuan, Jian Zheng, Hui Zhao
{"title":"Multimodal dynamic hierarchical clustering model for post-stroke cognitive impairment prediction.","authors":"Chen Bai, Tan Li, Yanyan Zheng, Gang Yuan, Jian Zheng, Hui Zhao","doi":"10.1186/s42492-025-00202-0","DOIUrl":"10.1186/s42492-025-00202-0","url":null,"abstract":"<p><p>Post-stroke cognitive impairment (PSCI) is a common and debilitating consequence of stroke that often arises from complex interactions between diverse brain alterations. The accurate early prediction of PSCI is critical for guiding personalized interventions. However, existing methods often struggle to capture complex structural disruptions and integrate multimodal information effectively. This study proposes the multimodal dynamic hierarchical clustering network (MDHCNet), a graph neural network designed for accurate and interpretable PSCI prediction. MDHCNet constructs brain graphs from diffusion-weighted imaging, magnetic resonance angiography, and T1- and T2-weighted images and integrates them with clinical features using a hierarchical cross-modal fusion module. Experimental results using a real-world stroke cohort demonstrated that MDHCNet consistently outperformed deep learning baselines. Ablation studies validated the benefits of multimodal fusion, while saliency-based interpretation highlighted discriminative brain regions associated with cognitive decline. These findings suggest that MDHCNet is an effective and explainable tool for early PSCI prediction, with the potential to support individualized clinical decision-making in stroke rehabilitation.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"20"},"PeriodicalIF":6.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12401840/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144972010","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
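MDHCNet is described as building brain graphs from four MRI modalities and fusing them with clinical features. The following is a loose, simplified PyTorch sketch of that idea, not the authors' architecture; the mean-pooled graph-convolution layer, hidden sizes, identity adjacency, and modality count are assumptions for illustration only.

```python
# Illustrative sketch of fusing per-modality brain-graph embeddings with clinical
# features for a binary PSCI prediction head (not the authors' MDHCNet).
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (nodes, in_dim); adj: (nodes, nodes) with self-loops already added
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return torch.relu(self.linear((adj / deg) @ x))   # row-normalized neighborhood averaging

class MultimodalFusionNet(nn.Module):
    def __init__(self, node_dim, clin_dim, n_modalities, hidden=64):
        super().__init__()
        self.gcns = nn.ModuleList(
            [SimpleGCNLayer(node_dim, hidden) for _ in range(n_modalities)]
        )
        self.clin_mlp = nn.Sequential(nn.Linear(clin_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden * (n_modalities + 1), 1)

    def forward(self, graphs, clinical):
        # graphs: list of (node_features, adjacency) tensors, one pair per imaging modality
        embeddings = [gcn(x, adj).mean(dim=0) for gcn, (x, adj) in zip(self.gcns, graphs)]
        embeddings.append(self.clin_mlp(clinical))
        return torch.sigmoid(self.head(torch.cat(embeddings)))

# Toy usage: 4 modalities (DWI, MRA, T1, T2), 90 brain regions, 10 clinical variables
regions, node_dim, clin_dim = 90, 16, 10
graphs = [(torch.randn(regions, node_dim), torch.eye(regions)) for _ in range(4)]
model = MultimodalFusionNet(node_dim, clin_dim, n_modalities=4)
print(model(graphs, torch.randn(clin_dim)))   # predicted PSCI probability for one subject
```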
Deep learning radiomics of elastography for diagnosing compensated advanced chronic liver disease: an international multicenter study.
IF 6.0 | CAS Zone 4 | Computer Science
Visual Computing for Industry Biomedicine and Art Pub Date : 2025-08-15 DOI: 10.1186/s42492-025-00199-6
Xue Lu, Haoyan Zhang, Hidekatsu Kuroda, Matteo Garcovich, Victor de Ledinghen, Ivica Grgurević, Runze Linghu, Hong Ding, Jiandong Chang, Min Wu, Cheng Feng, Xinping Ren, Changzhu Liu, Tao Song, Fankun Meng, Yao Zhang, Ye Fang, Sumei Ma, Jinfen Wang, Xiaolong Qi, Jie Tian, Xin Yang, Jie Ren, Ping Liang, Kun Wang
{"title":"Deep learning radiomics of elastography for diagnosing compensated advanced chronic liver disease: an international multicenter study.","authors":"Xue Lu, Haoyan Zhang, Hidekatsu Kuroda, Matteo Garcovich, Victor de Ledinghen, Ivica Grgurević, Runze Linghu, Hong Ding, Jiandong Chang, Min Wu, Cheng Feng, Xinping Ren, Changzhu Liu, Tao Song, Fankun Meng, Yao Zhang, Ye Fang, Sumei Ma, Jinfen Wang, Xiaolong Qi, Jie Tian, Xin Yang, Jie Ren, Ping Liang, Kun Wang","doi":"10.1186/s42492-025-00199-6","DOIUrl":"10.1186/s42492-025-00199-6","url":null,"abstract":"<p><p>Accurate, noninvasive diagnosis of compensated advanced chronic liver disease (cACLD) is essential for effective clinical management but remains challenging. This study aimed to develop a deep learning-based radiomics model using international multicenter data and to evaluate its performance by comparing it to the two-dimensional shear wave elastography (2D-SWE) cut-off method covering multiple countries or regions, etiologies, and ultrasound device manufacturers. This retrospective study included 1937 adult patients with chronic liver disease due to hepatitis B, hepatitis C, or metabolic dysfunction-associated steatotic liver disease. All patients underwent 2D-SWE imaging and liver biopsy at 17 centers across China, Japan, and Europe using devices from three manufacturers (SuperSonic Imagine, General Electric, and Mindray). The proposed generalized deep learning radiomics of elastography model integrated both elastographic images and liver stiffness measurements and was trained and tested on stratified internal and external datasets. A total of 1937 patients with 9472 2D-SWE images were included in the statistical analysis. Compared to 2D-SWE, the model achieved a higher area under the receiver operating characteristic curve (AUC) (0.89 vs 0.83, P = 0.025). It also achieved a highly consistent diagnosis across all subanalyses (P values: 0.21-0.91), whereas 2D-SWE exhibited different AUCs in the country or region (P < 0.001) and etiology (P = 0.005) subanalyses but not in the manufacturer subanalysis (P = 0.24). The model demonstrated more accurate and robust performance in noninvasive cACLD diagnosis than 2D-SWE across different countries or regions, etiologies, and manufacturers.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"19"},"PeriodicalIF":6.0,"publicationDate":"2025-08-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12354435/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144856587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
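The study compares the model's AUC against the 2D-SWE cut-off (0.89 vs 0.83, P = 0.025). One generic way to test such a paired AUC difference, which may well differ from the statistical test actually used in the paper, is a patient-level bootstrap, sketched below with synthetic labels and scores.

```python
# A hedged sketch: paired bootstrap comparison of two diagnostic scores by AUC difference.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_difference(y, score_a, score_b, n_boot=2000, seed=0):
    """Return the observed AUC difference and a crude two-sided bootstrap p-value."""
    rng = np.random.default_rng(seed)
    observed = roc_auc_score(y, score_a) - roc_auc_score(y, score_b)
    diffs = []
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # resample patients with replacement
        if len(np.unique(y[idx])) < 2:              # AUC needs both classes present
            continue
        diffs.append(roc_auc_score(y[idx], score_a[idx]) - roc_auc_score(y[idx], score_b[idx]))
    diffs = np.asarray(diffs)
    # crude two-sided p-value: how often the resampled difference crosses zero
    p = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return observed, p

# Toy example with two correlated scores of different separating power
rng = np.random.default_rng(1)
y = (rng.random(300) < 0.4).astype(int)
score_b = y * 0.6 + rng.normal(0, 0.35, 300)        # weaker "cut-off-like" score
score_a = y * 0.8 + rng.normal(0, 0.35, 300)        # stronger "model-like" score
print(bootstrap_auc_difference(y, score_a, score_b))
```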
Graph neural network-tracker: a graph neural network-based multi-sensor fusion framework for robust unmanned aerial vehicle tracking.
IF 3.2 | CAS Zone 4 | Computer Science
Visual Computing for Industry Biomedicine and Art Pub Date : 2025-07-16 DOI: 10.1186/s42492-025-00200-2
Karim Dabbabi, Tijeni Delleji
{"title":"Graph neural network-tracker: a graph neural network-based multi-sensor fusion framework for robust unmanned aerial vehicle tracking.","authors":"Karim Dabbabi, Tijeni Delleji","doi":"10.1186/s42492-025-00200-2","DOIUrl":"10.1186/s42492-025-00200-2","url":null,"abstract":"<p><p>Unmanned aerial vehicle (UAV) tracking is a critical task in surveillance, security, and autonomous navigation applications. In this study, we propose graph neural network-tracker (GNN-tracker), a novel GNN-based UAV tracking framework that effectively integrates graph-based spatial-temporal modelling, Transformer-based feature extraction, and multi-sensor fusion to enhance tracking robustness and accuracy. Unlike traditional tracking approaches, GNN-tracker dynamically constructs a spatiotemporal graph representation, improving identity consistency and reducing tracking errors under OCC-heavy scenarios. Experimental evaluations on optical, thermal, and fused UAV datasets demonstrate the superiority of GNN-tracker (fused) over state-of-the-art methods. The proposed model achieves multiple object tracking accuracy (MOTA) scores of 91.4% (fused), 89.1% (optical), and 86.3% (thermal), surpassing TransT by 8.9% in MOTA and 7.7% in higher order tracking accuracy (HOTA). The HOTA scores of 82.3% (fused), 80.1% (optical), and 78.7% (thermal) validate its strong object association capabilities, while its frames per second of 58.9 (fused), 56.8 (optical), and 54.3 (thermal) ensures real-time performance. Additionally, ablation studies confirm the essential role of graph-based modelling and multi-sensor fusion, with performance drops of up to 8.9% in MOTA when these components are removed. Thus, GNN-tracker (fused) offers a highly accurate, robust, and efficient UAV tracking solution, effectively addressing real-world challenges across diverse environmental conditions and multiple sensor modalities.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"18"},"PeriodicalIF":3.2,"publicationDate":"2025-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12267811/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144643753","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
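MOTA, the headline metric quoted above, aggregates misses, false positives, and identity switches over all frames relative to the total number of ground-truth boxes: MOTA = 1 - (FN + FP + IDSW) / GT. A worked toy computation, with purely illustrative per-frame counts:

```python
# Worked sketch of the MOTA metric; per-frame counts are illustrative only.
def mota(per_frame_counts):
    """per_frame_counts: iterable of (false_negatives, false_positives, id_switches, gt_boxes)."""
    fn = sum(c[0] for c in per_frame_counts)
    fp = sum(c[1] for c in per_frame_counts)
    idsw = sum(c[2] for c in per_frame_counts)
    gt = sum(c[3] for c in per_frame_counts)
    return 1.0 - (fn + fp + idsw) / gt

# Three toy frames: (FN, FP, IDSW, GT)
frames = [(1, 0, 0, 12), (0, 1, 1, 12), (2, 0, 0, 12)]
print(f"MOTA = {mota(frames):.3f}")   # 1 - 5/36 ~= 0.861
```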
Placenta segmentation redefined: review of deep learning integration of magnetic resonance imaging and ultrasound imaging.
IF 3.2 | CAS Zone 4 | Computer Science
Visual Computing for Industry Biomedicine and Art Pub Date : 2025-07-15 DOI: 10.1186/s42492-025-00197-8
Asmaa Jittou, Khalid El Fazazy, Jamal Riffi
{"title":"Placenta segmentation redefined: review of deep learning integration of magnetic resonance imaging and ultrasound imaging.","authors":"Asmaa Jittou, Khalid El Fazazy, Jamal Riffi","doi":"10.1186/s42492-025-00197-8","DOIUrl":"10.1186/s42492-025-00197-8","url":null,"abstract":"<p><p>Placental segmentation is critical for the quantitative analysis of prenatal imaging applications. However, segmenting the placenta using magnetic resonance imaging (MRI) and ultrasound is challenging because of variations in fetal position, dynamic placental development, and image quality. Most segmentation methods define regions of interest with different shapes and intensities, encompassing the entire placenta or specific structures. Recently, deep learning has emerged as a key approach that offer high segmentation performance across diverse datasets. This review focuses on the recent advances in deep learning techniques for placental segmentation in medical imaging, specifically MRI and ultrasound modalities, and cover studies from 2019 to 2024. This review synthesizes recent research, expand knowledge in this innovative area, and highlight the potential of deep learning approaches to significantly enhance prenatal diagnostics. These findings emphasize the importance of selecting appropriate imaging modalities and model architectures tailored to specific clinical scenarios. In addition, integrating both MRI and ultrasound can enhance segmentation performance by leveraging complementary information. This review also discusses the challenges associated with the high costs and limited availability of advanced imaging technologies. It provides insights into the current state of placental segmentation techniques and their implications for improving maternal and fetal health outcomes, underscoring the transformative impact of deep learning on prenatal diagnostics.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"17"},"PeriodicalIF":3.2,"publicationDate":"2025-07-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12263505/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144638295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Active interaction strategy generation for human-robot collaboration based on trust.
IF 3.2 | CAS Zone 4 | Computer Science
Visual Computing for Industry Biomedicine and Art Pub Date : 2025-06-23 DOI: 10.1186/s42492-025-00198-7
Yujie Guo, Pengfei Yi, Xiaopeng Wei, Dongsheng Zhou
{"title":"Active interaction strategy generation for human-robot collaboration based on trust.","authors":"Yujie Guo, Pengfei Yi, Xiaopeng Wei, Dongsheng Zhou","doi":"10.1186/s42492-025-00198-7","DOIUrl":"10.1186/s42492-025-00198-7","url":null,"abstract":"<p><p>In human-robot collaborative tasks, human trust in robots can reduce resistance to them, thereby increasing the success rate of task execution. However, most existing studies have focused on improving the success rate of human-robot collaboration (HRC) rather than on enhancing collaboration efficiency. To improve the overall collaboration efficiency while maintaining a high success rate, this study proposes an active interaction strategy generation for HRC based on trust. First, a trust-based optimal robot strategy generation method was proposed to generate the robot's optimal strategy in a HRC. This method employs a tree to model the HRC process under different robot strategies and calculates the optimal strategy based on the modeling results for the robot to execute. Second, the robot's performance was evaluated to calculate human's trust in a robot. A robot performance evaluation method based on a visual language model was also proposed. The evaluation results were input into the trust model to compute human's current trust. Finally, each time an object operation was completed, the robot's performance evaluation and optimal strategy generation methods worked together to automatically generate the optimal strategy of the robot for the next step until the entire collaborative task was completed. The experimental results demonstrates that this method significantly improve collaborative efficiency, achieving a high success rate in HRC.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"16"},"PeriodicalIF":3.2,"publicationDate":"2025-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12185789/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144477081","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
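The abstract describes updating the human's trust from the robot's evaluated performance after every object operation and using that trust to choose the next strategy. A deliberately simplified illustration of that loop (not the paper's trust model or its strategy tree), using exponential smoothing and a hypothetical trust-gated policy:

```python
# Minimal illustration of a trust-update loop; the smoothing rule, threshold,
# and strategy names are assumptions, not the paper's method.
def update_trust(trust, performance, alpha=0.3):
    """Blend previous trust with the latest performance score in [0, 1]."""
    return (1 - alpha) * trust + alpha * performance

def choose_strategy(trust, threshold=0.6):
    # Hypothetical policy: act proactively only when trust is high enough.
    return "proactive handover" if trust >= threshold else "wait for human confirmation"

trust = 0.5  # neutral prior
for step, perf in enumerate([0.9, 0.8, 0.4, 0.9], start=1):   # evaluated after each object operation
    trust = update_trust(trust, perf)
    print(f"step {step}: performance={perf:.1f}, trust={trust:.2f}, strategy={choose_strategy(trust)}")
```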
Avatars in the educational metaverse.
IF 3.2 | CAS Zone 4 | Computer Science
Visual Computing for Industry Biomedicine and Art Pub Date : 2025-06-10 DOI: 10.1186/s42492-025-00196-9
Md Zabirul Islam, Ge Wang
{"title":"Avatars in the educational metaverse.","authors":"Md Zabirul Islam, Ge Wang","doi":"10.1186/s42492-025-00196-9","DOIUrl":"10.1186/s42492-025-00196-9","url":null,"abstract":"<p><p>Avatars in the educational metaverse are revolutionizing the learning process by providing interactive and effective learning experiences. These avatars enable students to engage in realistic scenarios, work in groups, and develop essential skills using adaptive and intelligent technologies. The purpose of this review is to evaluate the contribution of avatars to education. It investigated the use of avatars to enhance learning by offering individualized experiences and supporting collaborative group activities in virtual environments. It also analyzed the recent progress in artificial intelligence, especially natural language processing and generative models, which have significantly improved avatar capabilities. In addition, it reviewed their use in customized learning, contextual teaching, and virtual simulations to improve student participation and achievement. This study also highlighted issues impacting its implementation, including data security, ethical concerns, and limited infrastructure. The paper ends with implications and recommendations for future research in this field.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"15"},"PeriodicalIF":3.2,"publicationDate":"2025-06-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12151956/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144259048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Radiographic prediction model based on X-rays predicting anterior cruciate ligament function in patients with knee osteoarthritis.
IF 3.2 | CAS Zone 4 | Computer Science
Visual Computing for Industry Biomedicine and Art Pub Date : 2025-06-06 DOI: 10.1186/s42492-025-00195-w
Guanghan Gao, Yaonan Zhang, Lei Shi, Lin Wang, Fei Wang, Qingyun Xue
{"title":"Radiographic prediction model based on X-rays predicting anterior cruciate ligament function in patients with knee osteoarthritis.","authors":"Guanghan Gao, Yaonan Zhang, Lei Shi, Lin Wang, Fei Wang, Qingyun Xue","doi":"10.1186/s42492-025-00195-w","DOIUrl":"10.1186/s42492-025-00195-w","url":null,"abstract":"<p><p>Knee osteoarthritis (KOA) is a prevalent chronic condition in the elderly and is often associated with instability caused by anterior cruciate ligament (ACL) degeneration. The functional integrity of ACL is crucial for the diagnosis and treatment of KOA. Radiographic imaging is a practical diagnostic tool for predicting the functional status of the ACL. However, the precision of the current evaluation methodologies remains suboptimal. Consequently, we aimed to identify additional radiographic features from X-ray images that could predict the ACL function in a larger cohort of patients with KOA. A retrospective analysis was conducted on 272 patients whose ACL function was verified intraoperatively between October 2021 and October 2024. The patients were categorized into ACL-functional and ACL-dysfunctional groups. Using least absolute shrinkage and selection operator regression and logistic regression, four significant radiographic predictors were identified: location of the deepest wear on the medial tibial plateau (middle and posterior), wear depth in the posterior third of the medial tibial plateau (> 1.40 mm), posterior tibial slope (PTS > 7.90°), and static anterior tibial translation (> 4.49 mm). A clinical prediction model was developed and visualized using a nomogram with calibration curves and receiver operating characteristic analysis to confirm the model performance. The prediction model demonstrated great discriminative ability, showing area under the curve values of 0.831 (88.4% sensitivity, 63.8% specificity) and 0.907 (86.1% sensitivity, 82.2% specificity) in the training and validation cohorts, respectively. Consequently, the authors established an efficient approach for accurate evaluation of ACL function in KOA patients.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"14"},"PeriodicalIF":3.2,"publicationDate":"2025-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12143998/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144235397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
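The predictor-selection pipeline described above (LASSO followed by logistic regression with ROC analysis) can be sketched generically as below; the feature names, synthetic data, regularization strength, and train/validation split are illustrative assumptions, not the study's measurements or code.

```python
# Hedged sketch of a LASSO-then-logistic-regression workflow with ROC evaluation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

feature_names = ["wear_location_posterior", "posterior_wear_depth_mm",
                 "posterior_tibial_slope_deg", "static_anterior_translation_mm",
                 "age", "bmi"]   # last two are example candidates that selection may drop

rng = np.random.default_rng(0)
X = rng.normal(size=(272, len(feature_names)))
y = (X[:, :4].sum(axis=1) + rng.normal(0, 1.5, 272) > 0).astype(int)   # ACL-dysfunctional = 1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr_s, X_te_s = scaler.transform(X_tr), scaler.transform(X_te)

# Step 1: L1-penalized (LASSO-style) logistic regression keeps predictors with nonzero weights.
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X_tr_s, y_tr)
selected = np.flatnonzero(lasso.coef_[0])
print("selected predictors:", [feature_names[i] for i in selected])

# Step 2: refit an ordinary logistic model on the selected predictors and evaluate by ROC AUC.
clf = LogisticRegression(max_iter=1000).fit(X_tr_s[:, selected], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te_s[:, selected])[:, 1])
print(f"validation AUC: {auc:.3f}")
```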
Artificial intelligence-assisted diagnosis of early allograft dysfunction based on ultrasound image and data.
IF 3.2 | CAS Zone 4 | Computer Science
Visual Computing for Industry Biomedicine and Art Pub Date : 2025-05-12 DOI: 10.1186/s42492-025-00192-z
Yaqing Meng, Mingyang Wang, Ningning Niu, Haoyan Zhang, Jinghan Yang, Guoying Zhang, Jing Liu, Ying Tang, Kun Wang
{"title":"Artificial intelligence-assisted diagnosis of early allograft dysfunction based on ultrasound image and data.","authors":"Yaqing Meng, Mingyang Wang, Ningning Niu, Haoyan Zhang, Jinghan Yang, Guoying Zhang, Jing Liu, Ying Tang, Kun Wang","doi":"10.1186/s42492-025-00192-z","DOIUrl":"10.1186/s42492-025-00192-z","url":null,"abstract":"<p><p>Early allograft dysfunction (EAD) significantly affects liver transplantation prognosis. This study evaluated the effectiveness of artificial intelligence (AI)-assisted methods in accurately diagnosing EAD and identifying its causes. The primary metric for assessing the accuracy was the area under the receiver operating characteristic curve (AUC). Accuracy, sensitivity, and specificity were calculated and analyzed to compare the performance of the AI models with each other and with radiologists. EAD classification followed the criteria established by Olthoff et al. A total of 582 liver transplant patients who underwent transplantation between December 2012 and June 2021 were selected. Among these, 117 patients (mean age 33.5 ± 26.5 years, 80 men) were evaluated. The ultrasound parameters, images, and clinical information of patients were extracted from the database to train the AI model. The AUC for the ultrasound-spectrogram fusion network constructed from four ultrasound images and medical data was 0.968 (95%CI: 0.940, 0.991), outperforming radiologists by 30% for all metrics. AI assistance significantly improved diagnostic accuracy, sensitivity, and specificity (P < 0.050) for both experienced and less-experienced physicians. EAD lacks efficient diagnosis and causation analysis methods. The integration of AI and ultrasound enhances diagnostic accuracy and causation analysis. By modeling only images and data related to blood flow, the AI model effectively analyzed patients with EAD caused by abnormal blood supply. Our model can assist radiologists in reducing judgment discrepancies, potentially benefitting patients with EAD in underdeveloped regions. Furthermore, it enables targeted treatment for those with abnormal blood supply.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"13"},"PeriodicalIF":3.2,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12069173/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144004074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
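The ultrasound-spectrogram fusion network above is described only at a high level (four ultrasound images combined with ultrasound parameters and clinical data), so the following is an illustrative two-branch fusion sketch (per-image CNN plus tabular MLP) rather than the authors' architecture; all layer sizes, input shapes, and variable names are assumptions.

```python
# Illustrative two-branch fusion sketch: a shared CNN encoder over several ultrasound
# views plus an MLP over tabular parameters, fused into one EAD probability.
import torch
import torch.nn as nn

class UltrasoundFusionNet(nn.Module):
    def __init__(self, n_tabular, n_images=4):
        super().__init__()
        self.cnn = nn.Sequential(                       # shared encoder applied to each image
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.tabular = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        self.head = nn.Linear(32 * n_images + 32, 1)    # fused features -> EAD probability

    def forward(self, images, tabular):
        # images: (batch, n_images, 1, H, W); tabular: (batch, n_tabular)
        feats = [self.cnn(images[:, i]) for i in range(images.shape[1])]
        fused = torch.cat(feats + [self.tabular(tabular)], dim=1)
        return torch.sigmoid(self.head(fused))

model = UltrasoundFusionNet(n_tabular=12)
imgs = torch.randn(2, 4, 1, 128, 128)    # four ultrasound views per patient (toy data)
tab = torch.randn(2, 12)                 # e.g., Doppler flow parameters and labs (toy data)
print(model(imgs, tab))                  # predicted EAD probability per patient
```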