Visual Computing for Industry, Biomedicine and Art: Latest Articles

PCRFed: personalized federated learning with contrastive representation for non-independently and identically distributed medical image segmentation.
IF 3.2 | CAS Q4 | Computer Science
Visual Computing for Industry, Biomedicine and Art. Pub Date: 2025-03-28. DOI: 10.1186/s42492-025-00191-0
Shengyuan Liu, Ruofan Zhang, Mengjie Fang, Hailin Li, Tianwang Xun, Zipei Wang, Wenting Shang, Jie Tian, Di Dong
{"title":"PCRFed: personalized federated learning with contrastive representation for non-independently and identically distributed medical image segmentation.","authors":"Shengyuan Liu, Ruofan Zhang, Mengjie Fang, Hailin Li, Tianwang Xun, Zipei Wang, Wenting Shang, Jie Tian, Di Dong","doi":"10.1186/s42492-025-00191-0","DOIUrl":"10.1186/s42492-025-00191-0","url":null,"abstract":"<p><p>Federated learning (FL) has shown great potential in addressing data privacy issues in medical image analysis. However, varying data distributions across different sites can create challenges in aggregating client models and achieving good global model performance. In this study, we propose a novel personalized contrastive representation FL framework, named PCRFed, which leverages contrastive representation learning to address the non-independent and identically distributed (non-IID) challenge and dynamically adjusts the distance between local clients and the global model to improve each client's performance without incurring additional communication costs. The proposed weighted model-contrastive loss provides additional regularization for local models, optimizing their respective distributions while effectively utilizing information from all clients to mitigate performance challenges caused by insufficient local data. The PCRFed approach was evaluated on two non-IID medical image segmentation datasets, and the results show that it outperforms several state-of-the-art FL frameworks, achieving higher single-client performance while ensuring privacy preservation and minimal communication costs. Our PCRFed framework can be adapted to various encoder-decoder segmentation network architectures and holds significant potential for advancing the use of FL in real-world medical applications. 
Based on a multi-center dataset, our framework demonstrates superior overall performance and higher single-client performance, achieving a 2.63% increase in the average Dice score for prostate segmentation.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"6"},"PeriodicalIF":3.2,"publicationDate":"2025-03-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11953490/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143735808","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
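The weighted model-contrastive loss is not spelled out in the abstract; the following is a minimal sketch in the spirit of MOON-style model-contrastive learning, assuming a cosine-similarity contrast between the local representation, the global model's representation, and the previous-round local representation, scaled by a per-client weight. All names (`z_local`, `z_global`, `z_prev`, `weight`, `tau`) are illustrative, not the paper's notation:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two representation vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def weighted_model_contrastive_loss(z_local, z_global, z_prev, weight=1.0, tau=0.5):
    # Pull the local representation toward the global model's (positive pair)
    # and away from the previous-round local one (negative pair); `weight`
    # lets each client scale this regularization dynamically.
    pos = np.exp(cosine(z_local, z_global) / tau)
    neg = np.exp(cosine(z_local, z_prev) / tau)
    return -weight * np.log(pos / (pos + neg))
```

The loss is small when the local representation already agrees with the global model, so well-aligned clients are regularized less.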
Principal component analysis and fine-tuned vision transformation integrating model explainability for breast cancer prediction.
IF 3.2 | CAS Q4 | Computer Science
Visual Computing for Industry, Biomedicine and Art. Pub Date: 2025-03-10. DOI: 10.1186/s42492-025-00186-x
Huong Hoang Luong, Phuc Phan Hong, Dat Vo Minh, Thinh Nguyen Le Quang, Anh Dinh The, Nguyen Thai-Nghe, Hai Thanh Nguyen
{"title":"Principal component analysis and fine-tuned vision transformation integrating model explainability for breast cancer prediction.","authors":"Huong Hoang Luong, Phuc Phan Hong, Dat Vo Minh, Thinh Nguyen Le Quang, Anh Dinh The, Nguyen Thai-Nghe, Hai Thanh Nguyen","doi":"10.1186/s42492-025-00186-x","DOIUrl":"10.1186/s42492-025-00186-x","url":null,"abstract":"<p><p>Breast cancer, which is the most commonly diagnosed cancers among women, is a notable health issues globally. Breast cancer is a result of abnormal cells in the breast tissue growing out of control. Histopathology, which refers to the detection and learning of tissue diseases, has appeared as a solution for breast cancer treatment as it plays a vital role in its diagnosis and classification. Thus, considerable research on histopathology in medical and computer science has been conducted to develop an effective method for breast cancer treatment. In this study, a vision Transformer (ViT) was employed to classify tumors into two classes, benign and malignant, in the Breast Cancer Histopathological Database (BreakHis). To enhance the model performance, we introduced the novel multi-head locality large kernel self-attention during fine-tuning, achieving an accuracy of 95.94% at 100× magnification, thereby improving the accuracy by 3.34% compared to a standard ViT (which uses multi-head self-attention). In addition, the application of principal component analysis for dimensionality reduction led to an accuracy improvement of 3.34%, highlighting its role in mitigating overfitting and reducing the computational complexity. In the final phase, SHapley Additive exPlanations, Local Interpretable Model-agnostic Explanations, and Gradient-weighted Class Activation Mapping were used for the interpretability and explainability of machine-learning models, aiding in understanding the feature importance and local explanations, and visualizing the model attention. 
In another experiment, ensemble learning with VGGIN further boosted the performance to 97.13% accuracy. Our approach exhibited a 0.98% to 17.13% improvement in accuracy compared with state-of-the-art methods, establishing a new benchmark for breast cancer histopathological image classification.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"5"},"PeriodicalIF":3.2,"publicationDate":"2025-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11893953/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143598058","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
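The PCA dimensionality-reduction step can be sketched with a plain SVD; treating each image's embedding as a row vector is an assumption, since the abstract does not state exactly where in the pipeline PCA is applied:

```python
import numpy as np

def pca_reduce(X, n_components):
    # Project row-wise feature vectors (e.g., ViT embeddings) onto their
    # top principal components to cut dimensionality before classification.
    Xc = X - X.mean(axis=0)                       # center the features
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # scores on top components

rng = np.random.default_rng(0)
features = rng.standard_normal((50, 128))  # 50 images, 128-dim embeddings (illustrative)
reduced = pca_reduce(features, 16)
```

Components come out in decreasing order of explained variance, which is what makes truncation a principled way to fight overfitting.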
Global residual stress field inference method for die-forging structural parts based on fusion of monitoring data and distribution prior.
IF 3.2 | CAS Q4 | Computer Science
Visual Computing for Industry, Biomedicine and Art. Pub Date: 2025-03-06. DOI: 10.1186/s42492-025-00187-w
Shuyuan Chen, Yingguang Li, Changqing Liu, Zhiwei Zhao, Zhibin Chen, Xiao Liu
{"title":"Global residual stress field inference method for die-forging structural parts based on fusion of monitoring data and distribution prior.","authors":"Shuyuan Chen, Yingguang Li, Changqing Liu, Zhiwei Zhao, Zhibin Chen, Xiao Liu","doi":"10.1186/s42492-025-00187-w","DOIUrl":"10.1186/s42492-025-00187-w","url":null,"abstract":"<p><p>Die-forging structural parts are widely used in the main load-bearing components of aircrafts because of their excellent mechanical properties and fatigue resistance. However, the forming and heat treatment processes of die-forging structural parts are complex, leading to high levels of internal stress and a complex distribution of residual stress fields (RSFs), which affect the deformation, fatigue life, and failure of structural parts throughout their lifecycles. Hence, the global RSF can provide the basis for process control. The existing RSF inference method based on deformation force data can utilize monitoring data to infer the global RSF of a regular part. However, owing to the irregular geometry of die-forging structural parts and the complexity of the RSF, it is challenging to solve ill-conditioned problems during the inference process, which makes it difficult to obtain the RSF accurately. This paper presents a global RSF inference method for the die-forging structural parts based on the fusion of monitoring data and distribution prior. Prior knowledge was derived from the RSF distribution trends obtained through finite element analysis. This enables the low-dimensional characterization of the RSF, reducing the number of parameters required to solve the equations. 
The effectiveness of this method was validated in both simulation and actual environments.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"4"},"PeriodicalIF":3.2,"publicationDate":"2025-03-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11885777/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143568451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
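A minimal sketch of the prior-constrained inversion idea: representing the field with a few FEA-derived distribution modes turns an ill-conditioned point-wise inversion into a small, well-posed least-squares problem. The linear forward model `M` and the ridge regularization are assumptions for illustration; the paper's actual formulation may differ:

```python
import numpy as np

def infer_rsf(M, basis, d, lam=1e-8):
    # M: linear map from the stress field to monitored deformation-force data;
    # basis: FEA-derived distribution modes (the prior), n_points x n_modes;
    # d: monitoring data. Solving for a few mode coefficients instead of
    # every stress value regularizes the inversion.
    A = M @ basis
    c = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ d)
    return basis @ c  # reconstructed global residual stress field

rng = np.random.default_rng(1)
basis = rng.standard_normal((200, 3))              # 3 FEA modes over 200 points
true_field = basis @ np.array([2.0, -1.0, 0.5])    # synthetic ground truth
M = rng.standard_normal((30, 200))                 # 30 force measurements
recovered = infer_rsf(M, basis, M @ true_field)
```

With only 3 unknown coefficients, 30 measurements over-determine the system, so the synthetic field is recovered almost exactly.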
Explainable machine learning framework for cataracts recognition using visual features.
IF 3.2 | CAS Q4 | Computer Science
Visual Computing for Industry, Biomedicine and Art. Pub Date: 2025-01-17. DOI: 10.1186/s42492-024-00183-6
Xiao Wu, Lingxi Hu, Zunjie Xiao, Xiaoqing Zhang, Risa Higashita, Jiang Liu
{"title":"Explainable machine learning framework for cataracts recognition using visual features.","authors":"Xiao Wu, Lingxi Hu, Zunjie Xiao, Xiaoqing Zhang, Risa Higashita, Jiang Liu","doi":"10.1186/s42492-024-00183-6","DOIUrl":"10.1186/s42492-024-00183-6","url":null,"abstract":"<p><p>Cataract is the leading ocular disease of blindness and visual impairment globally. Deep neural networks (DNNs) have achieved promising cataracts recognition performance based on anterior segment optical coherence tomography (AS-OCT) images; however, they have poor explanations, limiting their clinical applications. In contrast, visual features extracted from original AS-OCT images and their transform forms (e.g., AS-OCT-based histograms) have good explanations but have not been fully exploited. Motivated by these observations, an explainable machine learning framework to recognize cataracts severity levels automatically using AS-OCT images was proposed, consisting of three stages: visual feature extraction, feature importance explanation and selection, and recognition. First, the intensity histogram and intensity-based statistical methods are applied to extract visual features from original AS-OCT images and AS-OCT-based histograms. Subsequently, the SHapley Additive exPlanations and Pearson correlation coefficient methods are applied to analyze the feature importance and select significant visual features. Finally, an ensemble multi-class ridge regression method is applied to recognize the cataracts severity levels based on the selected visual features. 
Experiments on a clinical AS-OCT-NC dataset demonstrate that the proposed framework not only achieves competitive performance through comparisons with DNNs, but also has a good explanation ability, meeting the requirements of clinical diagnostic practice.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"3"},"PeriodicalIF":3.2,"publicationDate":"2025-01-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11748710/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143012990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
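The first two stages can be sketched as follows, assuming 8-bit AS-OCT intensities; the histogram binning, the particular statistics, and the Pearson-only selection are illustrative simplifications (the paper additionally uses SHAP, omitted here):

```python
import numpy as np

def intensity_features(img, bins=8):
    # Explainable visual features from an (assumed 8-bit) AS-OCT image:
    # a normalized intensity histogram plus simple intensity statistics.
    hist, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    stats = [img.mean(), img.std(), np.median(img), img.min(), img.max()]
    return np.concatenate([hist, stats])

def select_by_pearson(X, y, k):
    # Rank features by absolute Pearson correlation with the severity label
    # and keep the k most correlated ones.
    r = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
    return np.argsort(r)[::-1][:k]
```

Unlike DNN activations, every retained feature here has a direct physical reading (e.g., "mean lens intensity"), which is what gives the framework its explainability.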
Harmonized technical standard test methods for quality evaluation of medical fluorescence endoscopic imaging systems.
IF 3.2 | CAS Q4 | Computer Science
Visual Computing for Industry, Biomedicine and Art. Pub Date: 2025-01-10. DOI: 10.1186/s42492-024-00184-5
Bodong Liu, Zhaojun Guo, Pengfei Yang, Jian'an Ye, Kunshan He, Shen Gao, Chongwei Chi, Yu An, Jie Tian
{"title":"Harmonized technical standard test methods for quality evaluation of medical fluorescence endoscopic imaging systems.","authors":"Bodong Liu, Zhaojun Guo, Pengfei Yang, Jian'an Ye, Kunshan He, Shen Gao, Chongwei Chi, Yu An, Jie Tian","doi":"10.1186/s42492-024-00184-5","DOIUrl":"10.1186/s42492-024-00184-5","url":null,"abstract":"<p><p>Fluorescence endoscopy technology utilizes a light source of a specific wavelength to excite the fluorescence signals of biological tissues. This capability is extremely valuable for the early detection and precise diagnosis of pathological changes. Identifying a suitable experimental approach and metric for objectively and quantitatively assessing the imaging quality of fluorescence endoscopy is imperative to enhance the image evaluation criteria of fluorescence imaging technology. In this study, we propose a new set of standards for fluorescence endoscopy technology to evaluate the optical performance and image quality of fluorescence imaging objectively and quantitatively. This comprehensive set of standards encompasses fluorescence test models and imaging quality assessment protocols to ensure that the performance of fluorescence endoscopy systems meets the required standards. In addition, it aims to enhance the accuracy and uniformity of the results by standardizing testing procedures. The formulation of pivotal metrics and testing methodologies is anticipated to facilitate direct quantitative comparisons of the performance of fluorescence endoscopy devices. 
This advancement is expected to foster the harmonization of clinical and preclinical evaluations using fluorescence endoscopy imaging systems, thereby improving diagnostic precision and efficiency.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"2"},"PeriodicalIF":3.2,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11723869/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142956034","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
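The abstract does not enumerate the standard's metrics. Contrast-to-noise ratio (CNR) between a fluorescent-target region and the background is one metric such a protocol would plausibly include; it is shown here purely as an assumed example, not as the standard's actual definition:

```python
import numpy as np

def contrast_to_noise_ratio(target_roi, background_roi):
    # CNR: mean signal difference between the fluorescent target and the
    # background, normalized by the background noise (standard deviation).
    return (target_roi.mean() - background_roi.mean()) / background_roi.std()

rng = np.random.default_rng(3)
target = 100 + rng.standard_normal(1000)      # bright fluorescent phantom ROI
background = 10 + rng.standard_normal(1000)   # non-fluorescent background ROI
cnr = contrast_to_noise_ratio(target, background)
```

Fixing the ROI geometry and phantom concentrations, as a test model does, is what makes such a number comparable across devices.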
Advancing breast cancer diagnosis: token vision transformers for faster and accurate classification of histopathology images.
IF 3.2 | CAS Q4 | Computer Science
Visual Computing for Industry, Biomedicine and Art. Pub Date: 2025-01-08. DOI: 10.1186/s42492-024-00181-8
Mouhamed Laid Abimouloud, Khaled Bensid, Mohamed Elleuch, Mohamed Ben Ammar, Monji Kherallah
{"title":"Advancing breast cancer diagnosis: token vision transformers for faster and accurate classification of histopathology images.","authors":"Mouhamed Laid Abimouloud, Khaled Bensid, Mohamed Elleuch, Mohamed Ben Ammar, Monji Kherallah","doi":"10.1186/s42492-024-00181-8","DOIUrl":"10.1186/s42492-024-00181-8","url":null,"abstract":"<p><p>The vision transformer (ViT) architecture, with its attention mechanism based on multi-head attention layers, has been widely adopted in various computer-aided diagnosis tasks due to its effectiveness in processing medical image information. ViTs are notably recognized for their complex architecture, which requires high-performance GPUs or CPUs for efficient model training and deployment in real-world medical diagnostic devices. This renders them more intricate than convolutional neural networks (CNNs). This difficulty is also challenging in the context of histopathology image analysis, where the images are both limited and complex. In response to these challenges, this study proposes a TokenMixer hybrid-architecture that combines the strengths of CNNs and ViTs. This hybrid architecture aims to enhance feature extraction and classification accuracy with shorter training time and fewer parameters by minimizing the number of input patches employed during training, while incorporating tokenization of input patches using convolutional layers and encoder transformer layers to process patches across all network layers for fast and accurate breast cancer tumor subtype classification. The TokenMixer mechanism is inspired by the ConvMixer and TokenLearner models. First, the ConvMixer model dynamically generates spatial attention maps using convolutional layers, enabling the extraction of patches from input images to minimize the number of input patches used in training. 
Second, the TokenLearner model extracts relevant regions from the selected input patches, tokenizes them to improve feature extraction, and trains all tokenized patches in an encoder transformer network. We evaluated the TokenMixer model on the BreakHis public dataset, comparing it with ViT-based and other state-of-the-art methods. Our approach achieved impressive results for both binary and multi-classification of breast cancer subtypes across various magnification levels (40×, 100×, 200×, 400×). The model demonstrated accuracies of 97.02% for binary classification and 93.29% for multi-classification, with decision times of 391.71 and 1173.56 s, respectively. These results highlight the potential of our hybrid deep ViT-CNN architecture for advancing tumor classification in histopathological images. The source code is accessible: https://github.com/abimouloud/TokenMixer .</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"8 1","pages":"1"},"PeriodicalIF":3.2,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11711433/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142956033","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
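The tokenization step can be sketched as follows: with kernel size equal to stride, patch extraction plus a linear projection is equivalent to a convolutional patch embedding, the building block that ConvMixer-style tokenization relies on. The projection weights and shapes are illustrative, not the paper's actual layer sizes:

```python
import numpy as np

def conv_tokenize(img, patch=4, dim=32, seed=0):
    # Split a (H, W) image into non-overlapping patches and linearly project
    # each one to a token; equivalent to a conv layer with kernel = stride.
    H, W = img.shape
    patches = (img.reshape(H // patch, patch, W // patch, patch)
                  .transpose(0, 2, 1, 3)
                  .reshape(-1, patch * patch))
    proj = np.random.default_rng(seed).standard_normal((patch * patch, dim)) * 0.02
    return patches @ proj  # (num_tokens, dim), fed to the transformer encoder

tokens = conv_tokenize(np.random.default_rng(1).random((32, 32)))
```

Reducing `num_tokens` before the encoder is exactly where the training-time and parameter savings described above come from.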
Semi-supervised contour-driven broad learning system for autonomous segmentation of concealed prohibited baggage items.
IF 3.2 | CAS Q4 | Computer Science
Visual Computing for Industry, Biomedicine and Art. Pub Date: 2024-12-24. DOI: 10.1186/s42492-024-00182-7
Divya Velayudhan, Abdelfatah Ahmed, Taimur Hassan, Muhammad Owais, Neha Gour, Mohammed Bennamoun, Ernesto Damiani, Naoufel Werghi
{"title":"Semi-supervised contour-driven broad learning system for autonomous segmentation of concealed prohibited baggage items.","authors":"Divya Velayudhan, Abdelfatah Ahmed, Taimur Hassan, Muhammad Owais, Neha Gour, Mohammed Bennamoun, Ernesto Damiani, Naoufel Werghi","doi":"10.1186/s42492-024-00182-7","DOIUrl":"10.1186/s42492-024-00182-7","url":null,"abstract":"<p><p>With the exponential rise in global air traffic, ensuring swift passenger processing while countering potential security threats has become a paramount concern for aviation security. Although X-ray baggage monitoring is now standard, manual screening has several limitations, including the propensity for errors, and raises concerns about passenger privacy. To address these drawbacks, researchers have leveraged recent advances in deep learning to design threat-segmentation frameworks. However, these models require extensive training data and labour-intensive dense pixel-wise annotations and are finetuned separately for each dataset to account for inter-dataset discrepancies. Hence, this study proposes a semi-supervised contour-driven broad learning system (BLS) for X-ray baggage security threat instance segmentation referred to as C-BLX. The research methodology involved enhancing representation learning and achieving faster training capability to tackle severe occlusion and class imbalance using a single training routine with limited baggage scans. The proposed framework was trained with minimal supervision using resource-efficient image-level labels to localize illegal items in multi-vendor baggage scans. More specifically, the framework generated candidate region segments from the input X-ray scans based on local intensity transition cues, effectively identifying concealed prohibited items without entire baggage scans. The multi-convolutional BLS exploits the rich complementary features extracted from these region segments to predict object categories, including threat and benign classes. 
The contours corresponding to the region segments predicted as threats were then utilized to yield the segmentation results. The proposed C-BLX system was thoroughly evaluated on three highly imbalanced public datasets and surpassed other competitive approaches in baggage-threat segmentation, yielding 90.04%, 78.92%, and 59.44% in terms of mIoU on GDXray, SIXray, and Compass-XP, respectively. Furthermore, the limitations of the proposed system in extracting precise region segments in intricate noisy settings and potential strategies for overcoming them through post-processing techniques were explored (source code will be available at https://github.com/Divs1159/CNN_BLS .).</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"7 1","pages":"30"},"PeriodicalIF":3.2,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11666859/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142883119","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
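The mIoU figures above follow the standard per-class intersection-over-union, averaged over the classes present in either mask; this is a sketch of the metric, not of the C-BLX system itself:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    # Mean intersection-over-union across classes; classes absent from both
    # the prediction and the ground truth are skipped.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.array([[0, 0], [1, 1]])  # toy 2x2 predicted label map
gt = np.array([[0, 1], [1, 1]])    # toy ground truth
score = mean_iou(pred, gt, num_classes=2)
```

On heavily imbalanced datasets such as SIXray, averaging per class (rather than per pixel) is what keeps rare threat classes from being drowned out by background.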
Energy consumption forecasting for laser manufacturing of large artifacts based on fusionable transfer learning.
IF 3.2 | CAS Q4 | Computer Science
Visual Computing for Industry, Biomedicine and Art. Pub Date: 2024-12-02. DOI: 10.1186/s42492-024-00178-3
Linxuan Wang, Jinghua Xu, Shuyou Zhang, Jianrong Tan, Shaomei Fei, Xuezhi Shi, Jihong Pang, Sheng Luo
{"title":"Energy consumption forecasting for laser manufacturing of large artifacts based on fusionable transfer learning.","authors":"Linxuan Wang, Jinghua Xu, Shuyou Zhang, Jianrong Tan, Shaomei Fei, Xuezhi Shi, Jihong Pang, Sheng Luo","doi":"10.1186/s42492-024-00178-3","DOIUrl":"10.1186/s42492-024-00178-3","url":null,"abstract":"<p><p>This study presents an energy consumption (EC) forecasting method for laser melting manufacturing of metal artifacts based on fusionable transfer learning (FTL). To predict the EC of manufacturing products, particularly from scale-down to scale-up, a general paradigm was first developed by categorizing the overall process into three main sub-steps. The operating electrical power was further formulated as a combinatorial function, based on which an operator learning network was adopted to fit the nonlinear relations between the fabricating arguments and EC. Parallel-arranged networks were constructed to investigate the impacts of fabrication variables and devices on power. Considering the interconnections among these factors, the outputs of the neural networks were blended and fused to jointly predict the electrical power. Most innovatively, large artifacts can be decomposed into time-dependent laser-scanning trajectories, which can be further transformed into fusionable information via neural networks, inspired by large language model. Accordingly, transfer learning can deal with either scale-down or scale-up forecasting, namely, FTL with scalability within artifact structures. The effectiveness of the proposed FTL was verified through physical fabrication experiments via laser powder bed fusion. The relative error of the average and overall EC predictions based on FTL was maintained below 0.83%. The melting fusion quality was examined using metallographic diagrams. 
The proposed FTL framework can forecast the EC of scaled structures, which is particularly helpful in price estimation and quotation of large metal products towards carbon peaking and carbon neutrality.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"7 1","pages":"29"},"PeriodicalIF":3.2,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11612079/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142772951","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
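The quantities behind the "below 0.83%" figure can be sketched directly: EC is operating power integrated over the time spent on each laser-scanning trajectory segment, and accuracy is reported as relative error. Piecewise-constant power per segment is an assumption made here for illustration:

```python
def energy_consumption(power_w, duration_s):
    # EC of a build as operating power (W) integrated over the dwell time (s)
    # of each laser-scanning trajectory segment; returns joules.
    return sum(p * t for p, t in zip(power_w, duration_s))

def relative_error(predicted, actual):
    # The metric behind the sub-0.83% EC prediction error quoted above.
    return abs(predicted - actual) / actual

ec = energy_consumption([100.0, 200.0], [10.0, 5.0])  # two toy segments
err = relative_error(99.2, 100.0)
```

Decomposing a large artifact into such segments is also what lets the transfer-learning step reuse power models fitted on scaled-down builds.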
Computational analysis of variability and uncertainty in the clinical reference on magnetic resonance imaging radiomics: modelling and performance.
IF 3.2 | CAS Q4 | Computer Science
Visual Computing for Industry, Biomedicine and Art. Pub Date: 2024-11-19. DOI: 10.1186/s42492-024-00180-9
Cindy Xue, Jing Yuan, Gladys G Lo, Darren M C Poon, Winnie Cw Chu
{"title":"Computational analysis of variability and uncertainty in the clinical reference on magnetic resonance imaging radiomics: modelling and performance.","authors":"Cindy Xue, Jing Yuan, Gladys G Lo, Darren M C Poon, Winnie Cw Chu","doi":"10.1186/s42492-024-00180-9","DOIUrl":"10.1186/s42492-024-00180-9","url":null,"abstract":"<p><p>To conduct a computational investigation to explore the influence of clinical reference uncertainty on magnetic resonance imaging (MRI) radiomics feature selection, modelling, and performance. This study used two sets of publicly available prostate cancer MRI = radiomics data (Dataset 1: n = 260; Dataset 2: n = 100) with Gleason score clinical references. Each dataset was divided into training and holdout testing datasets at a ratio of 7:3 and analysed independently. The clinical references of the training set were permuted at different levels (increments of 5%) and repeated 20 times. Four feature selection algorithms and two classifiers were used to construct the models. Cross-validation was employed for training, while a separate hold-out testing set was used for evaluation. The Jaccard similarity coefficient was used to evaluate feature selection, while the area under the curve (AUC) and accuracy were used to assess model performance. An analysis of variance test with Bonferroni correction was conducted to compare the metrics of each model. The consistency of the feature selection performance decreased substantially with the clinical reference permutation. AUCs of the trained models with permutation particularly after 20% were significantly lower (Dataset 1 (with ≥ 20% permutation): 0.67, and Dataset 2 (≥ 20% permutation): 0.74), compared to the AUC of models without permutation (Dataset 1: 0.94, Dataset 2: 0.97). The performances of the models were also associated with larger uncertainties and an increasing number of permuted clinical references. 
Clinical reference uncertainty can substantially influence MRI radiomic feature selection and modelling. The high accuracy of clinical references should be helpful in building reliable and robust radiomic models. Careful interpretation of the model performance is necessary, particularly for high-dimensional data.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"7 1","pages":"28"},"PeriodicalIF":3.2,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11573982/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
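The permutation protocol and the Jaccard stability measure can be sketched directly; shuffling labels only among the selected cases is an assumption about how the permutation is implemented, since the abstract does not specify the scheme:

```python
import numpy as np

def permute_references(y, fraction, seed=0):
    # Corrupt a given fraction of clinical reference labels by shuffling
    # them among the selected cases, simulating reference uncertainty
    # while preserving the overall label distribution.
    rng = np.random.default_rng(seed)
    y = np.asarray(y).copy()
    idx = rng.choice(len(y), size=int(round(fraction * len(y))), replace=False)
    y[idx] = y[rng.permutation(idx)]
    return y

def jaccard(selected_a, selected_b):
    # Jaccard similarity between two selected-feature index sets, used to
    # quantify feature-selection stability under permutation.
    a, b = set(selected_a), set(selected_b)
    return len(a & b) / len(a | b)
```

Running feature selection on the clean and permuted labels and comparing the selected sets with `jaccard` reproduces the stability analysis described above in miniature.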
Survey of real-time brainmedia in artistic exploration.
IF 3.2 | CAS Q4 | Computer Science
Visual Computing for Industry, Biomedicine and Art. Pub Date: 2024-11-18. DOI: 10.1186/s42492-024-00179-2
Rem RunGu Lin, Kang Zhang
{"title":"Survey of real-time brainmedia in artistic exploration.","authors":"Rem RunGu Lin, Kang Zhang","doi":"10.1186/s42492-024-00179-2","DOIUrl":"10.1186/s42492-024-00179-2","url":null,"abstract":"<p><p>This survey examines the evolution and impact of real-time brainmedia on artistic exploration, contextualizing developments within a historical framework. To enhance knowledge on the entanglement between the brain, mind, and body in an increasingly mediated world, this work defines a clear scope at the intersection of bio art and interactive art, concentrating on real-time brainmedia artworks developed in the 21st century. It proposes a set of criteria and a taxonomy based on historical notions, interaction dynamics, and media art representations. The goal is to provide a comprehensive overview of real-time brainmedia, setting the stage for future explorations of new paradigms in communication between humans, machines, and the environment.</p>","PeriodicalId":29931,"journal":{"name":"Visual Computing for Industry Biomedicine and Art","volume":"7 1","pages":"27"},"PeriodicalIF":3.2,"publicationDate":"2024-11-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11570570/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142649143","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0