Knee-cartilage segmentation from MR images using Multi-view Hypergraph Convolutional Neural Networks

Christos Chadoulos, John Theocharis, Andreas Symeonidis, Serafeim Moustakidis

Applied Intelligence, vol. 55, no. 13 (2025). Published: 2025-08-28. DOI: 10.1007/s10489-025-06808-4
https://link.springer.com/article/10.1007/s10489-025-06808-4
Abstract
Leveraging the increased capacity of hypergraphs to model complex data structures, we propose in this article the Multi-view Hyper-Graph Convolutional Network (MVHGCN) to yield automated knee-joint cartilage segmentations from MRIs. The main properties of our approach are as follows: 1) Node features are obtained from multi-view (MV) acquisitions, corresponding to different feature extractors or image modalities. 2) Node embeddings are generated using a distributive MV convolution scheme that combines the various view-specific convolutions; the results are aggregated via an attention-based fusion module that automatically learns the weights of the different views. 3) Our model integrates local- and global-level learning simultaneously: local hypergraph convolutions explore the relationships across spatially aligned node libraries, while global hypergraph convolutions search for global affinities between nodes located at different positions within the image. 4) We propose two different blending schemes to combine local and global convolutions, namely the cross-talk (CT) and the collaborative (COL) blending units. Using these units as building blocks, we construct the MVHGCN model, a deep network with enhanced feature representation and learning capabilities. The suggested segmentation method is evaluated on the publicly available Osteoarthritis Initiative (OAI) cohort. Specifically, we have designed a thorough experimental setup, including a parameter sensitivity analysis and comparative results against a series of existing traditional methods, deep CNN models, and graph convolutional networks. The results show that MVHGCN outperforms the competing methods, achieving overall cartilage segmentation scores of \(\mathcal{DSC} = 95.81\%\) and \(\mathcal{DSC} = 96.33\%\) for the CT and COL blending units, respectively.
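To make the two core operations named in the abstract more concrete, the sketch below illustrates a per-view hypergraph convolution followed by attention-based fusion of the view-specific embeddings. It is a minimal illustration only: the propagation rule follows the widely used HGNN formulation \(X' = D_v^{-1/2} H W D_e^{-1} H^\top D_v^{-1/2} X \Theta\) with unit hyperedge weights, and all class, function, and parameter names (hypergraph_laplacian, ViewHyperGraphConv, MultiViewHGConv, n_views, etc.) are assumptions for illustration; it does not reproduce the authors' MVHGCN implementation or the local/global CT and COL blending units.

```python
# Hedged sketch: view-specific hypergraph convolutions + attention fusion.
# Assumes the standard HGNN propagation rule and unit hyperedge weights.
import torch
import torch.nn as nn
import torch.nn.functional as F


def hypergraph_laplacian(H: torch.Tensor) -> torch.Tensor:
    """Build the normalized operator Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
    from an incidence matrix H of shape (N nodes, E hyperedges), W = I."""
    Dv = H.sum(dim=1)                                  # vertex degrees (N,)
    De = H.sum(dim=0)                                  # hyperedge degrees (E,)
    Dv_inv_sqrt = torch.diag(Dv.clamp(min=1e-6).pow(-0.5))
    De_inv = torch.diag(De.clamp(min=1e-6).pow(-1.0))
    return Dv_inv_sqrt @ H @ De_inv @ H.t() @ Dv_inv_sqrt


class ViewHyperGraphConv(nn.Module):
    """One view-specific hypergraph convolution: X' = relu(G X Theta)."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim)

    def forward(self, X: torch.Tensor, G: torch.Tensor) -> torch.Tensor:
        return F.relu(G @ self.theta(X))


class MultiViewHGConv(nn.Module):
    """Runs one hypergraph convolution per view and fuses the resulting
    embeddings with softmax attention weights over the views."""
    def __init__(self, n_views: int, in_dim: int, out_dim: int):
        super().__init__()
        self.convs = nn.ModuleList(
            [ViewHyperGraphConv(in_dim, out_dim) for _ in range(n_views)]
        )
        self.attn = nn.Linear(out_dim, 1, bias=False)

    def forward(self, X_views, G_views):
        # X_views: list of (N, in_dim) node-feature matrices, one per view
        # G_views: list of (N, N) propagation operators, one per view
        Z = torch.stack(
            [conv(X, G) for conv, X, G in zip(self.convs, X_views, G_views)],
            dim=0,
        )                                                  # (V, N, out_dim)
        scores = self.attn(torch.tanh(Z)).mean(dim=1)      # one score per view
        alpha = torch.softmax(scores, dim=0).unsqueeze(-1) # (V, 1, 1) weights
        return (alpha * Z).sum(dim=0)                      # fused (N, out_dim)


# Example usage (random data): 3 views, 500 nodes, 64-dim input features.
# views = [torch.rand(500, 64) for _ in range(3)]
# ops = [hypergraph_laplacian((torch.rand(500, 200) > 0.9).float()) for _ in range(3)]
# layer = MultiViewHGConv(n_views=3, in_dim=64, out_dim=32)
# fused = layer(views, ops)   # (500, 32) attention-fused node embeddings
```

In this reading, the learned attention weights play the role of the fusion module that weighs the contribution of each view before the fused embeddings are passed to subsequent layers.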
Journal description:
With a focus on research in artificial intelligence and neural networks, this journal addresses issues involving solutions of real-life manufacturing, defense, management, government and industrial problems which are too complex to be solved through conventional approaches and require the simulation of intelligent thought processes, heuristics, applications of knowledge, and distributed and parallel processing. The integration of these multiple approaches in solving complex problems is of particular importance.
The journal presents new and original research and technological developments, addressing real and complex issues applicable to difficult problems. It provides a medium for exchanging scientific research and technological achievements accomplished by the international community.