{"title":"Biomechanical evaluation of the effects of thread parameters on dental implant stability: a systematic review.","authors":"Masoud Arabbeiki, Mohammad Reza Niroomand","doi":"10.1007/s11517-025-03367-1","DOIUrl":"https://doi.org/10.1007/s11517-025-03367-1","url":null,"abstract":"<p><p>The threads of dental implants are critical components that transfer occlusal loads to the surrounding bone. The appropriate size of thread parameters can influence the stability of the implant after implantation. Despite several research studies on the effectiveness of implant thread parameters, there is limited structured information available. This study aims to conduct a systematic review to evaluate the biomechanical effects of thread parameters, namely, thread depth, thread width, thread pitch, and thread angle on implant stability. A comprehensive literature review was conducted in PubMed/MEDLINE, Scopus, ScienceDirect, and Web of Science for research published in English in the last two decades according to the PRISMA protocols. The extracted data were organized in the following order: area, bone layers, bone type, implant design, implant material, failure criteria/unit, loading type, statistical analysis/optimization, experimental validation, convergence analysis, boundary conditions, parts of the Finite Element Model, studied variables, and main findings. The search yielded 580 records, with 39 studies meeting the selection criteria and being chosen for the review. All four thread parameters were found to affect the stress and strain distribution in cancellous and cortical bones. Thread pitch and depth are more important for implant primary stability as they are directly correlated with the functional surface area between the implant and bone. Moreover, thread pitch, depth, and width can increase the insertion torque, which is favorable for implant primary stability, especially in low-quality bones. 
The thread angle can also direct occlusal forces to the bone more smoothly to prevent bone overloading and destructive shear stresses, which cause bone resorption. This structured review provides valuable insights into the biomechanical effects of thread parameters on implant stability.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144037367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A deep learning model with interpretable squeeze-and-excitation for automated rehabilitation exercise assessment.","authors":"Md Johir Raihan, Md Atiqur Rahman Ahad, Abdullah-Al Nahid","doi":"10.1007/s11517-025-03372-4","DOIUrl":"https://doi.org/10.1007/s11517-025-03372-4","url":null,"abstract":"<p><p>Rehabilitation exercises are critical for recovering from motor dysfunction caused by neurological conditions like stroke, back pain, Parkinson's disease, and spinal cord injuries. Traditionally, these exercises require constant monitoring by therapists, which is time-consuming and costly, often leading to therapist shortages. This paper introduces a deep learning model, convolutional neural network - squeeze excitation (CNN-SE), to automate rehabilitation exercise assessment. By optimizing its parameters with the grey wolf optimization algorithm, the model was fine-tuned for optimal performance. The model's effectiveness was tested on both healthy and unhealthy participants with motor dysfunction, providing a comprehensive evaluation of its capabilities. To interpret the model's decisions and understand its inner workings, we employed Shapley additive explanations (SHAP) to analyze feature importance at each time step. 
Our CNN-SE model achieved a state-of-the-art mean absolute deviation (MAD) of 0.127 on the KIMORE dataset and a comparable MAD of 0.014 on the UI-PRMD dataset across various exercises, demonstrating its potential to provide a cost-effective, efficient alternative to traditional therapist-led evaluations.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144037342","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep learning-based automatic cranial implant design through direct defect shape prediction and its comparison study.","authors":"Afaque Rafique Memon, Haochen Shi, Tarique Rafique Memon, Jan Egger, Xiaojun Chen","doi":"10.1007/s11517-025-03363-5","DOIUrl":"https://doi.org/10.1007/s11517-025-03363-5","url":null,"abstract":"<p><p>Defects of the human cranium are a type of head bone damage, and cranial implants can be used to repair the defective cranium. The automation of the implant design process is crucial in reducing the corresponding therapy time. Taking the cranial implant design problem as a special kind of shape completion task, an automatic cranial implant design workflow is proposed, which consists of a deep neural network for the direct shape prediction of the missing part of the defective cranium and conventional post-processing steps to refine the automatically generated implant. To evaluate the proposed workflow, we employ cross-validation and report an average Dice Similarity Score and boundary Dice Similarity Score of 0.81 and 0.81, respectively. We also measure the surface distance error using the 95th quantile of the Hausdorff Distance, which yields an average of 3.01 mm. Comparison with the manual cranial implant design procedure also demonstrated the convenience of the proposed workflow. 
In addition, a plugin is developed for 3D Slicer, which implements the proposed automatic cranial implant design workflow and can assist end-users.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143992388","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Attention-guided deep framework for polyp localization and subsequent classification via polyp local and Siamese feature fusion.","authors":"Pradipta Sasmal, Susant Kumar Panigrahi, Swarna Laxmi Panda, M K Bhuyan","doi":"10.1007/s11517-025-03369-z","DOIUrl":"https://doi.org/10.1007/s11517-025-03369-z","url":null,"abstract":"<p><p>Colorectal cancer (CRC) is one of the leading causes of death worldwide. This paper proposes an automated diagnostic technique to detect, localize, and classify polyps in colonoscopy video frames. The proposed model adopts the deep YOLOv4 model that incorporates both spatial and contextual information in the form of spatial attention and channel attention blocks, respectively, for better localization of polyps. Finally, the detected polyps are classified as adenoma or non-adenoma by leveraging a fusion of deep and handcrafted features. Polyp shape and texture are essential features in discriminating polyp types. Therefore, the proposed work utilizes a pyramid histogram of oriented gradient (PHOG) and embedding features learned via a triplet Siamese architecture to extract these features. The PHOG extracts local shape information from each polyp class, whereas the Siamese network extracts intra-polyp discriminating features. The individual and cross-database performances on two databases suggest the robustness of our method in polyp localization. The competitive analysis based on significant clinical parameters with current state-of-the-art methods confirms that our method can be used for automated polyp localization in both real-time and offline colonoscopic video frames. Our method provides average precisions of 0.8971 and 0.9171 and F1 scores of 0.8869 and 0.8812 on the Kvasir-SEG and SUN databases, respectively. Similarly, the proposed classification framework for the detected polyps yields a classification accuracy of 96.66% on a publicly available UCI colonoscopy video dataset. 
Moreover, the classification framework provides an F1 score of 96.54% that validates the potential of the proposed framework in polyp localization and classification.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":""},"PeriodicalIF":2.6,"publicationDate":"2025-05-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144057224","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of a cognition-sensitive spatial virtual reality game for Alzheimer's disease.","authors":"Rashmita Chatterjee, Zahra Moussavi","doi":"10.1007/s11517-024-03270-1","DOIUrl":"10.1007/s11517-024-03270-1","url":null,"abstract":"<p><p>Spatial impairment characterizes Alzheimer's disease (AD) from its earliest stages. We present the design and preliminary evaluation of \"Barn Ruins,\" a serious virtual reality (VR) wayfinding game for early-stage AD. Barn Ruins is tailored to the cognitive abilities of this population, featuring simple controls and an error-based scoring system. Ten younger adults, ten cognitively healthy older adults, and ten age-matched individuals with AD participated in this study. They underwent cognitive assessments using the Montreal Cognitive Assessment (MoCA) and the Montgomery-Åsberg Depression Rating Scale (MADRS) before gameplay. The game involves navigating a virtual environment to find a target room, with increasing levels of difficulty. This study aimed to confirm the cognitive sensitivity of the Barn Ruins' spatial learning score by studying its relationship with MoCA scores. MoCA scores and spatial learning scores had a correlation coefficient of 0.755 (p < 0.001). Logistic regression further revealed that higher spatial learning scores significantly predicted lower odds of cognitive impairment (OR = 0.495, 95% CI [0.274, 0.746], p < 0.005). The initial results suggest that the game is effective in differentiating performance among participant groups. 
This research demonstrates the potential of the Barn Ruins game as an innovative tool for assessing spatial navigation in AD, highlighting areas for future validation and investigation as a training tool.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"1355-1365"},"PeriodicalIF":2.6,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142899971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated measurement of cardiothoracic ratio based on semantic segmentation integration model using deep learning.","authors":"Jiajun Feng, Yuqian Huang, Zhenbin Hu, Junjie Guo","doi":"10.1007/s11517-024-03263-0","DOIUrl":"10.1007/s11517-024-03263-0","url":null,"abstract":"<p><p>The objective of this study is to investigate the efficacy of the semantic segmentation model in predicting cardiothoracic ratio (CTR) and heart enlargement and compare its consistency with the reference standard. A total of 650 consecutive chest radiographs from our center and 756 chest radiographs from public datasets were retrospectively included to develop a segmentation model. Three semantic segmentation models were used to segment the heart and lungs. A soft voting integration method was used to improve the segmentation accuracy and measure CTR automatically. Bland-Altman and Pearson's correlation analyses were used to compare the consistency and correlation between CTR automated measurements and reference standards. CTR automated measurements were compared with the reference standard using the Wilcoxon signed-rank test. The diagnostic efficacy of the model for heart enlargement was evaluated using the AUC. The soft voting integration model was strongly correlated (r = 0.98, P < 0.001) and consistent (average standard deviation of 0.0048 cm/s) with the reference standard. There was no statistical difference between automated CTR measurements and the reference standard in healthy subjects or in patients with pneumothorax, pleural effusion, or lung mass (P > 0.05). In the external test data, the accuracy, sensitivity, specificity, and AUC in determining heart enlargement were 96.0%, 79.5%, 99.1%, and 0.988, respectively. The deep learning method computed CTR for each chest radiograph faster than the radiologist's average manual measurement time (about 2 s vs 25.75 ± 4.35 s, P < 0.001). 
This study provides a semantic segmentation integration model that measures CTR on chest radiographs and determines heart enlargement effectively, quickly, and accurately, even when chest structure is altered by different chest diseases. The automated segmentation integration model helps improve the consistency of CTR measurement, reduce the workload of radiologists, and improve their work efficiency.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"1343-1353"},"PeriodicalIF":2.6,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142872615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Toward efficient slide-level grading of liver biopsy via explainable deep learning framework.","authors":"Bingchen Li, Qiming He, Jing Chang, Bo Yang, Xi Tang, Yonghong He, Tian Guan, Guangde Zhou","doi":"10.1007/s11517-024-03266-x","DOIUrl":"10.1007/s11517-024-03266-x","url":null,"abstract":"<p><p>In the context of chronic liver diseases, where variability in progression necessitates early and precise diagnosis, this study addresses the limitations of traditional histological analysis and the shortcomings of existing deep learning approaches. A novel patch-level classification model employing multi-scale feature extraction and fusion was developed to enhance the grading accuracy and interpretability of liver biopsies, analyzing 1322 cases across various staining methods. The study also introduces a slide-level aggregation framework, comparing different diagnostic models, to efficiently integrate local histological information. Results from extensive validation show that the slide-level model consistently achieved high F1 scores, notably 0.9 for inflammatory activity and steatosis, and demonstrated rapid diagnostic capabilities with less than one minute per slide on average. The patch-level model also performed well, with an F1 score of 0.64 for ballooning and 0.99 for other indicators, and proved transferable to public datasets. 
The conclusion drawn is that the proposed analytical framework offers a reliable basis for the diagnosis and treatment of chronic liver diseases, with the added benefit of robust interpretability, suggesting its practical utility in clinical settings.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"1435-1449"},"PeriodicalIF":2.6,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142980407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic positioning of cutting planes for bone tumor resection surgery.","authors":"Alessio Romanelli, Michaela Servi, Francesco Buonamici, Yary Volpe","doi":"10.1007/s11517-024-03281-y","DOIUrl":"10.1007/s11517-024-03281-y","url":null,"abstract":"<p><p>In bone tumor resection surgery, patient-specific cutting guides aid the surgeon in the resection of a precise part of the bone. Despite the use of automation methodologies in surgical guide modeling, to date, the placement of cutting planes is a manual task. This work presents an algorithm for the automatic positioning of cutting planes to reduce the amount of healthy bone resected and thus improve post-operative outcomes. The algorithm uses particle swarm optimization to search for the optimal positioning of points defining a cutting surface composed of planes parallel to a surgical approach direction. The quality of a cutting surface is evaluated by an objective function that considers two key variables: the volumes of healthy bone resected and tumor removed. The algorithm was tested on three tumor cases in long bone epiphyses (two tibial, one humeral) with varying numbers of planes. Optimal optimization parameters were determined; varying the parameters across iterations yielded a lower mean and standard deviation of the objective function. Initializing particle swarm optimization with a plausible cutting surface configuration further improved stability and minimized healthy bone resection. 
Future work is required to achieve 3D optimization of plane positioning, further improving the solution.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"1521-1534"},"PeriodicalIF":2.6,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12064637/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143015121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Performance investigation of MVMD-MSI algorithm in frequency recognition for SSVEP-based brain-computer interface and its application in robotic arm control.","authors":"Rongrong Fu, Shaoxiong Niu, Xiaolei Feng, Ye Shi, Chengcheng Jia, Jing Zhao, Guilin Wen","doi":"10.1007/s11517-024-03236-3","DOIUrl":"10.1007/s11517-024-03236-3","url":null,"abstract":"<p><p>This study focuses on improving the performance of steady-state visual evoked potential (SSVEP) in brain-computer interfaces (BCIs) for robotic control systems. The challenge lies in effectively reducing the impact of artifacts on raw data to enhance performance in both quality and reliability. The proposed MVMD-MSI algorithm combines the advantages of multivariate variational mode decomposition (MVMD) and multivariate synchronization index (MSI). Compared to widely used algorithms, the novelty of this method is its capability of decomposing nonlinear and non-stationary EEG signals into intrinsic mode functions (IMFs) across different frequency bands with the best center frequency and bandwidth. Therefore, SSVEP decoding performance can be improved by this method, and the effectiveness of MVMD-MSI was evaluated on a robotic arm with 6 degrees of freedom. Offline experiments were conducted to optimize the algorithm's parameters, resulting in significant improvements. Additionally, the algorithm showed good performance even with fewer channels and shorter data lengths. In online experiments, the algorithm achieved an average accuracy of 98.31% at 1.8 s, confirming its feasibility and effectiveness for real-time SSVEP BCI-based robotic arm applications. The MVMD-MSI algorithm, as proposed, represents a significant advancement in SSVEP analysis for robotic control systems. 
It enhances decoding performance and shows promise for practical application in this field.</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"1367-1381"},"PeriodicalIF":2.6,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142899973","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"RGVPSeg: multimodal information fusion network for retinogeniculate visual pathway segmentation.","authors":"Qingrun Zeng, Lin Yang, Yongqiang Li, Lei Xie, Yuanjing Feng","doi":"10.1007/s11517-024-03248-z","DOIUrl":"10.1007/s11517-024-03248-z","url":null,"abstract":"<p><p>The segmentation of the retinogeniculate visual pathway (RGVP) enables quantitative analysis of its anatomical structure. Multimodal learning has exhibited considerable potential in segmenting the RGVP based on structural MRI (sMRI) and diffusion MRI (dMRI). However, the intricate nature of the skull base environment and the slender morphology of the RGVP pose challenges for existing methodologies to adequately leverage the complementary information from each modality. In this study, we propose a multimodal information fusion network designed to optimize and select the complementary information across multiple modalities: the T1-weighted (T1w) images, the fractional anisotropy (FA) images, and the fiber orientation distribution function (fODF) peaks; the modalities supervise each other during this process. Specifically, we add a supervised master-assistant cross-modal learning framework between the encoder layers of different modalities so that the characteristics of different modalities can be more fully utilized to achieve a more accurate segmentation result. We apply RGVPSeg to an MRI dataset with 102 subjects from the Human Connectome Project (HCP) and 10 subjects from Multi-shell Diffusion MRI (MDM). The experimental results are promising and demonstrate that the proposed framework is feasible and outperforms the compared methods. 
Our code is freely available at https://github.com/yanglin9911/RGVPSeg .</p>","PeriodicalId":49840,"journal":{"name":"Medical & Biological Engineering & Computing","volume":" ","pages":"1397-1411"},"PeriodicalIF":2.6,"publicationDate":"2025-05-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142915995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}