{"title":"Systematic characterization of new EBT4 radiochromic films in clinical x-ray beams.","authors":"Rao Khan, Robabeh Rahimi, Jiajin Fan, Kuan Ling Chen","doi":"10.1088/2057-1976/ad8c49","DOIUrl":"https://doi.org/10.1088/2057-1976/ad8c49","url":null,"abstract":"<p><p><i>Objective</i>. We aim to characterize the kinetics of radiation-induced optical density in newly released EBT4 radiochromic films exposed to clinical x-rays. Several film models and batches were evaluated for film sensitivity, optical signal growth with time, relative film noise, and minimum detectable limits (MDL).<i>Approach</i>. Radiochromic film pieces from a single batch of EBT3 and three batches of EBT4 were exposed to doses of 77.38 cGy, 386.92 cGy, and 773.84 cGy using a 6 MV x-ray beam. The films were scanned with a flatbed scanner at specific time intervals up to 120 h. The time-series net optical density of the red, green and blue channels was corrected for the scanner's response over time and studied to establish the saturation characteristics of the film polymerization process. Dose-response from 3.86 cGy to 1935 cGy was also determined for each channel. The MDL of the films was quantitatively defined as the dose at which the net optical density of the red channel doubles the standard deviation of the residual signal at zero dose. The relative noise characteristics of EBT3 versus EBT4 were studied as a function of time, dose and scanner resolution.<i>Main Results</i>. For doses ≥ 100 cGy, analysis revealed a stability of optical density beyond 48 h post-exposure for both EBT3 and EBT4 films. EBT3 films attained 80%-90% of their 48 h net optical density within minutes of irradiation, compared to 72%-88% for EBT4 films. The rate of growth was slowest for the blue channel, fastest for red, with green in between. The MDL for EBT4 averaged 15 cGy over three batches, whereas EBT3 films reliably detected doses as low as 8.5 cGy.<i>Significance</i>. 
Several batches of the new EBT4 film showed a slightly lower response than their predecessor over the 3.86 cGy to 1935 cGy range. For all practical purposes, the post-irradiation growth of polymers ceases between 48 and 60 h for both EBT films. Overall, the EBT4 film exhibited noise characteristics similar to EBT3, except at lower doses, where its noise was higher than its predecessor's.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142590046","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
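The netOD and MDL quantities described in the abstract above can be sketched numerically. All values below (scanner pixel values, zero-dose noise level, low-dose sensitivity) are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def net_optical_density(pv_exposed, pv_unexposed):
    """Net optical density from mean scanner pixel values (16-bit assumed)."""
    return np.log10(pv_unexposed / pv_exposed)

# Hypothetical mean red-channel pixel values before and after exposure
pv0, pv = 40000.0, 30000.0
netOD = net_optical_density(pv, pv0)

# MDL per the definition above: the dose whose netOD equals twice the
# standard deviation of the zero-dose residual signal, assuming a locally
# linear low-dose response netOD = sensitivity * dose (both values assumed).
sigma_zero = 0.002      # std of residual netOD at zero dose
sensitivity = 0.0005    # netOD per cGy in the low-dose region
mdl = 2 * sigma_zero / sensitivity  # in cGy
```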
{"title":"Optimizing pulmonary chest x-ray classification with stacked feature ensemble and swin transformer integration.","authors":"Manas Ranjan Mohanty, Pradeep Kumar Mallick, Annapareddy V N Reddy","doi":"10.1088/2057-1976/ad8c46","DOIUrl":"https://doi.org/10.1088/2057-1976/ad8c46","url":null,"abstract":"<p><p>This research presents an integrated framework designed to automate the classification of pulmonary chest x-ray images. Leveraging convolutional neural networks (CNNs) with a focus on transformer architectures, the aim is to improve both the accuracy and efficiency of pulmonary chest x-ray image analysis. A central aspect of this approach involves utilizing pre-trained networks such as VGG16, ResNet50, and MobileNetV2 to create a feature ensemble. A notable innovation is the adoption of a stacked ensemble technique, which combines outputs from multiple pre-trained models to generate a comprehensive feature representation. In the feature ensemble approach, each image undergoes individual processing through the three pre-trained networks, and pooled images are extracted just before the flatten layer of each model. Consequently, three pooled images in 2D grayscale format are obtained for each original image. These pooled images serve as samples for creating 3D images resembling RGB images through stacking, intended for classifier input in subsequent analysis stages. By incorporating stacked pooling layers to facilitate feature ensemble, a broader range of features is utilized while effectively managing complexities associated with processing the augmented feature pool. Moreover, the study incorporates the Swin Transformer architecture, known for effectively capturing both local and global features. The Swin Transformer architecture is further optimized using the artificial hummingbird algorithm (AHA). 
By fine-tuning hyperparameters such as patch size, multi-layer perceptron (MLP) ratio, and channel numbers, the AHA optimization technique aims to maximize classification accuracy. The proposed integrated framework, featuring the AHA-optimized Swin Transformer classifier utilizing stacked features, is evaluated using three diverse chest x-ray datasets: VinDr-CXR, PediCXR, and MIMIC-CXR. The observed accuracies of 98.874%, 98.528%, and 98.958%, respectively, underscore the robustness and generalizability of the developed model across various clinical scenarios and imaging conditions.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142589928","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
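The stacking step described in this abstract (three 2D pooled grayscale maps, one per backbone, combined into a single 3-channel, RGB-like classifier input) can be sketched with NumPy. The map shape and the per-map min-max normalization are assumptions, not the authors' exact pipeline.

```python
import numpy as np

def stack_pooled_maps(m_vgg, m_resnet, m_mobilenet):
    """Stack three 2D pooled feature maps (one per pre-trained backbone)
    into a 3-channel, RGB-like array for the downstream classifier."""
    maps = []
    for m in (m_vgg, m_resnet, m_mobilenet):
        m = m.astype(np.float32)
        # Min-max normalize each map to [0, 1] so no backbone dominates
        m = (m - m.min()) / (m.max() - m.min() + 1e-8)
        maps.append(m)
    return np.stack(maps, axis=-1)  # shape (H, W, 3)

# Hypothetical 7x7 pre-flatten pooled maps from the three backbones
rng = np.random.default_rng(0)
pooled = [rng.random((7, 7)) for _ in range(3)]
x = stack_pooled_maps(*pooled)  # x.shape == (7, 7, 3)
```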
{"title":"Multi-level digital-twin models of pulmonary mechanics: correlation analysis of 3D CT lung volume and 2D chest motion.","authors":"Cong Zhou, J Geoffrey Chase, Yuhong Chen","doi":"10.1088/2057-1976/ad8c47","DOIUrl":"https://doi.org/10.1088/2057-1976/ad8c47","url":null,"abstract":"<p><p>Creating multi-level digital-twin models for mechanical ventilation requires a detailed estimation of regional lung volume. An accurate generic map between 2D chest surface motion and 3D regional lung volume could provide improved regionalisation and clinically acceptable estimates localising lung damage. This work investigates the relationship between CT lung volumes and the forced vital capacity (FVC), a surrogate of tidal volume previously shown to be linked to 2D chest motion. In particular, a convolutional neural network (CNN) with a U-Net architecture is employed to build a lung segmentation model using a benchmark CT scan dataset. An automated thresholding method is proposed for image morphology analysis to improve model performance. Finally, the trained model is applied to an independent CT dataset with FVC measurements for correlation analysis of the CT lung volume projection against lung recruitment capacity. Model training results show a clear improvement in lung segmentation performance with the proposed automated thresholding method compared to a typically suggested fixed-value selection, achieving accuracy greater than 95% for both training and independent validation sets. The correlation analysis for 160 patients shows a good correlation (<i>R</i><sup>2</sup> = 0.73) between the proposed 2D volume projection and the FVC value, indicating that a larger, denser projected lung volume corresponds to a greater FVC and greater lung recruitable capacity. The overall results thus validate the potential of using non-contact, non-invasive 2D measures to regionalise lung mechanics models to equivalent 3D models via a generic map based on this correlation. 
The clinical impact of improved lung mechanics digital twins due to regionalising the lung mechanics and volume to specific lung regions could be very high in managing mechanical ventilation and diagnosing or locating lung injury or dysfunction based on regular monitoring instead of intermittent and invasive lung imaging modalities.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142589994","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
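The abstract above proposes an automated thresholding method for the lung segmentation step. As a generic illustration of automated thresholding (not the authors' specific method), Otsu's method picks the cut that maximizes between-class variance of the gray-level histogram:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's method: choose the threshold maximizing between-class
    variance. A generic stand-in for the automated-thresholding idea;
    the paper's specific method may differ."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)            # weight of the "background" class
    mu = np.cumsum(p * centers)  # cumulative mean
    mu_t = mu[-1]                # global mean
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    between[~np.isfinite(between)] = 0
    return centers[np.argmax(between)]

# Synthetic bimodal "CT slice": dark background vs brighter lung tissue
rng = np.random.default_rng(0)
img = np.concatenate([rng.normal(0.2, 0.05, 500), rng.normal(0.8, 0.05, 500)])
t = otsu_threshold(img)  # lands between the two modes
```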
{"title":"Radar-based contactless heart beat detection with a modified Pan-Tompkins algorithm.","authors":"Hoang Thi Yen, Vuong Tri Tiep, Van-Phuc Hoang, Quang-Kien Trinh, Hai-Duong Nguyen, Nguyen Trong Tuyen, Guanghao Sun","doi":"10.1088/2057-1976/ad8c48","DOIUrl":"https://doi.org/10.1088/2057-1976/ad8c48","url":null,"abstract":"<p><p><i>Background.</i>Using radar for non-contact measurement of human vital signs has garnered significant attention due to its undeniable benefits. However, achieving reasonably good accuracy in contactless measurement scenarios is still a technical challenge.<i>Materials and methods.</i>The proposed method includes two stages. The first stage involves data segmentation and signal channel selection. In the second stage, the raw radar signal from the chosen channel is subjected to a modified Pan-Tompkins algorithm.<i>Results.</i>The experimental findings from twelve individuals demonstrated a strong agreement between the contactless radar and contact electrocardiography (ECG) devices for heart rate measurement, with a correlation coefficient of 98.74%, and 95% limits of agreement between radar and ECG measurements of 2.4 beats per minute.<i>Conclusion.</i>The results showed high agreement between heart rates calculated from radar signals and those measured by electrocardiograph. 
This research paves the way for future applications using non-contact sensors to support and potentially replace contact sensors in healthcare.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142590042","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
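The classic Pan-Tompkins stages that the paper above modifies for radar signals (derivative, squaring, moving-window integration, thresholded peak picking) can be sketched as follows. The synthetic pulse train and all parameters are illustrative assumptions, not the authors' modified algorithm.

```python
import numpy as np

def pan_tompkins_envelope(sig, fs, win_s=0.15):
    """Core Pan-Tompkins stages on a 1D signal: the derivative emphasizes
    slopes, squaring rectifies and amplifies, and moving-window
    integration smooths each beat into one envelope bump."""
    d = np.diff(sig, prepend=sig[0])
    sq = d ** 2
    win = max(1, int(win_s * fs))
    return np.convolve(sq, np.ones(win) / win, mode="same")

def detect_beats(env, fs, refractory_s=0.4):
    """Simple fixed-threshold peak picking with a refractory period."""
    thr = 0.5 * env.max()
    peaks, last = [], -10**9
    for i in range(1, len(env) - 1):
        if env[i] > thr and env[i] >= env[i - 1] and env[i] >= env[i + 1]:
            if (i - last) / fs > refractory_s:
                peaks.append(i)
                last = i
    return peaks

# Synthetic "heartbeat" train: one sharp pulse per second for 10 s, fs = 100 Hz
fs = 100
sig = np.zeros(10 * fs)
sig[30::100] = 1.0
sig[31::100] = 1.0
env = pan_tompkins_envelope(sig, fs)
beats = detect_beats(env, fs)  # 10 detected beats over 10 s, i.e. 60 bpm
```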
{"title":"<i>In silico</i> dosimetry for a prostate cancer treatment using <sup>198</sup>Au nanoparticles.","authors":"Lucas Verdi Angelocci, Sabrina Spigaroli Sgrignoli, Carla Daruich de Souza, Paula Cristina Guimarães Antunes, Maria Elisa Chuery Martins Rostelato, Carlos Alberto Zeituni","doi":"10.1088/2057-1976/ad8acc","DOIUrl":"10.1088/2057-1976/ad8acc","url":null,"abstract":"<p><p><i>Objective</i>. To estimate dose rates delivered by radioactive <sup>198</sup>Au nanoparticles for prostate cancer nanobrachytherapy, identifying the contributions of photons and electrons emitted from the source.<i>Approach</i>. Utilizing <i>in silico</i> models, two different anatomical representations were compared: a mathematical model and an unstructured mesh model based on the International Commission on Radiological Protection (ICRP) Publication 145 phantom. Dose rates per unit activity were calculated for the tumor and nearby healthy tissues, including healthy prostate tissue, the urinary bladder wall and the rectum, using the Monte Carlo code MCNP6.2.<i>Main results</i>. Results indicate that both models provide dose rate estimates within the same order of magnitude, with the mathematical model overestimating doses to the prostate and bladder by approximately 20% compared to the unstructured mesh model. The discrepancies for the tumor and rectum were below 4%. Photons emitted from the source were the primary contributors to the dose to other organs, while 97.9% of the dose to the tumor was due to electrons emitted from the source.<i>Significance</i>. Our findings emphasize the importance of model selection in dosimetry, particularly the advantages of using realistic anatomical phantoms for accurate dose calculations. The study demonstrates the feasibility and effectiveness of <sup>198</sup>Au nanoparticles in achieving high dose concentrations in tumor regions while minimizing exposure to surrounding healthy tissues. 
Beta emissions were found to be predominantly responsible for tumor dose delivery, reinforcing the potential of<sup>198</sup>Au nanoparticles in localized radiation therapy. We advocate for using realistic body phantoms in further research to enhance reliability in dosimetry for nanobrachytherapy, as the field still lacks dedicated protocols.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142494072","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A study on sleep posture analysis using fibre Bragg grating arrays based mattress.","authors":"Manish Mishra, Prasant Kumar Sahu, Mrinal Datta","doi":"10.1088/2057-1976/ad8b52","DOIUrl":"10.1088/2057-1976/ad8b52","url":null,"abstract":"<p><p>Prolonged sleeping postures or unusual postures can lead to the development of various ailments such as subacromial impingement syndrome, sleep paralysis in the elderly, nocturnal gastroesophageal reflux, sore development, etc. Fibre Bragg gratings (a type of optical sensor) have gained huge popularity due to their small size, higher sensitivity and responsivity, and encapsulation flexibility. In the present study, FBG arrays (two FBGs with a 10 mm gap between them) are employed, as they are advantageous in terms of data collection, mitigating sensor location effects, and multiplexing features. In this work, liquid-silicone-encapsulated FBG arrays are placed in the head (E), shoulder (C, D), and lower half body (A, B) regions for analyzing the strain patterns generated by different sleeping postures, namely Supine (P1), Left Fetus (P2), Right Fetus (P3), and Over stomach (P4). These strain patterns were analyzed in two ways: combined (averaging the data from the FBGs of each array) and individual (analyzing the data from each FBG separately). Both analyses suggested that the FBGs in the arrays responded swiftly to the strain changes caused by changes in sleeping posture. 3D histograms were utilized to track the strain changes and analyze different sleeping postures. A discussion regarding closely related postures and long-hour monitoring has also been included. Arrays in the lower half (A, B) and shoulder (C, D) regions proved pivotal in discriminating body postures. 
The average standard deviation of strain for the different arrays was in the range of 0.1 to 0.19, suggesting reliable and appreciable strain-handling capabilities of the liquid-silicone-encapsulated arrays.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142494073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
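The strain sensing behind the mattress above rests on the standard FBG relation: at constant temperature, the relative Bragg-wavelength shift is proportional to axial strain, Δλ/λ_B = (1 − p_e)·ε, with p_e ≈ 0.22 for silica fibre. The grating wavelength and shift below are hypothetical example values.

```python
# Strain from a Bragg wavelength shift (temperature held constant):
#   dLambda / lambda_B = (1 - p_e) * strain
P_E = 0.22  # effective photo-elastic coefficient of silica fibre

def strain_from_shift(lambda_b_nm, dlambda_nm):
    """Axial strain (dimensionless) from a Bragg wavelength shift."""
    return (dlambda_nm / lambda_b_nm) / (1 - P_E)

# Hypothetical example: a 1550 nm grating shifting by 12 pm under body load
eps = strain_from_shift(1550.0, 0.012)
print(f"{eps * 1e6:.1f} microstrain")  # prints "9.9 microstrain"
```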
{"title":"Classification of EEG evoked in 2D and 3D virtual reality: traditional machine learning versus deep learning.","authors":"MingLiang Zuo, BingBing Yu, Li Sui","doi":"10.1088/2057-1976/ad89c5","DOIUrl":"10.1088/2057-1976/ad89c5","url":null,"abstract":"<p><p><i>Background</i>. Virtual reality (VR) simulates real-life events and scenarios and is widely utilized in education, entertainment, and medicine. VR can be presented in two dimensions (2D) or three dimensions (3D), with 3D VR offering a more realistic and immersive experience. Previous research has shown that electroencephalogram (EEG) profiles induced by 3D VR differ from those of 2D VR in various aspects, including brain rhythm power, activation, and functional connectivity. However, studies focused on classifying EEG in 2D and 3D VR contexts remain limited.<i>Methods</i>. A 56-channel EEG was recorded while visual stimuli were presented in 2D and 3D VR. The recorded EEG signals were classified using two machine learning approaches: traditional machine learning and deep learning. In the traditional approach, features such as power spectral density (PSD) and common spatial patterns (CSP) were extracted, and three classifiers were used: support vector machines (SVM), K-nearest neighbors (KNN), and random forests (RF). For the deep learning approach, a specialized convolutional neural network, EEGNet, was employed. The classification performance of these methods was then compared.<i>Results</i>. In terms of accuracy, precision, recall, and F1-score, the deep learning method outperformed traditional machine learning approaches. Specifically, the classification accuracy using the EEGNet deep learning model reached up to 97.86%.<i>Conclusions</i>. EEGNet-based deep learning significantly outperforms conventional machine learning methods in classifying EEG signals induced by 2D and 3D VR. 
Given EEGNet's design for EEG-based brain-computer interfaces (BCI), this superior classification performance suggests that it can enhance the application of 3D VR in BCI systems.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142494076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
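Of the traditional features named in the abstract above, PSD band power is the simplest to illustrate. A minimal periodogram-based sketch on a synthetic one-channel "EEG" follows; the sampling rate, band edges, and signal are assumptions, and CSP and the classifiers are not reproduced here.

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Power of a 1D signal in [f_lo, f_hi) Hz via the periodogram,
    the kind of PSD feature fed to SVM/KNN/RF classifiers."""
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    band = (freqs >= f_lo) & (freqs < f_hi)
    return psd[band].sum()

# Synthetic channel: a 10 Hz alpha rhythm plus noise, fs = 250 Hz, 4 s
rng = np.random.default_rng(1)
fs = 250
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)

# Theta, alpha, beta band powers as a tiny feature vector;
# the alpha entry dominates because of the 10 Hz component
features = [band_power(x, fs, lo, hi) for lo, hi in [(4, 8), (8, 13), (13, 30)]]
```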
{"title":"An improved AlexNet deep learning method for limb tumor cancer prediction and detection.","authors":"Arunachalam Perumal, Janakiraman Nithiyanantham, Jamuna Nagaraj","doi":"10.1088/2057-1976/ad89c7","DOIUrl":"10.1088/2057-1976/ad89c7","url":null,"abstract":"<p><p>Synovial sarcoma (SS) is a rare cancer that forms in soft tissues around joints, and early detection is crucial for improving patient survival rates. This study introduces a convolutional neural network (CNN) using an improved AlexNet deep learning classifier to improve SS diagnosis from digital pathological images. Key preprocessing steps, including dataset augmentation and noise-reduction techniques such as adaptive median filtering (AMF) and histogram equalization, were employed to improve image quality. Feature extraction was conducted using the Gray-Level Co-occurrence Matrix (GLCM) and Improved Linear Discriminant Analysis (ILDA), while image segmentation targeted spindle-shaped cells using repetitive phase-level set segmentation (RPLSS). The improved AlexNet architecture features additional convolutional layers and resized input images, leading to superior performance. The model demonstrated significant improvements in accuracy, sensitivity, specificity, and AUC, outperforming existing methods by 3%, 1.70%, 6.08%, and 8.86%, respectively, in predicting SS.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142494074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
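The GLCM features used above count how often pairs of quantized gray levels co-occur at a fixed pixel offset; texture statistics such as contrast are then read off the normalized matrix. A minimal NumPy sketch (single offset, coarse quantization; the paper's exact GLCM configuration is not specified in the abstract):

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Gray-Level Co-occurrence Matrix for one offset (dx, dy), built by
    counting co-occurring pairs of quantized gray levels, then normalized."""
    q = np.floor(img * levels).clip(0, levels - 1).astype(int)
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(m):
    """Contrast feature: sum over (i - j)^2 * P(i, j)."""
    i, j = np.indices(m.shape)
    return ((i - j) ** 2 * m).sum()

# A uniform image has zero contrast; a checkerboard has high contrast
flat = np.full((8, 8), 0.5)
checker = np.indices((8, 8)).sum(axis=0) % 2 / 1.0
c_flat = glcm_contrast(glcm(flat))        # 0.0
c_checker = glcm_contrast(glcm(checker))  # 49.0 (levels 0 and 7 alternate)
```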
{"title":"MCI-Net: Mamba-Convolutional lightweight self-attention medical image segmentation network.","authors":"Yelin Zhang, Guanglei Wang, Pengchong Ma, Yan Li","doi":"10.1088/2057-1976/ad8acb","DOIUrl":"10.1088/2057-1976/ad8acb","url":null,"abstract":"<p><p>With the development of deep learning in the field of medical image segmentation, various network segmentation models have been developed. Currently, the most common network models in medical image segmentation can be roughly categorized into pure convolutional networks, Transformer-based networks, and networks combining convolution and Transformer architectures. However, when dealing with complex variations and irregular shapes in medical images, existing networks face issues such as incomplete information extraction, large model parameter sizes, high computational complexity, and long processing times. In contrast, models with lower parameter counts and complexity can efficiently, quickly, and accurately identify lesion areas, significantly reducing diagnosis time and providing valuable time for subsequent treatments. Therefore, this paper proposes a lightweight network named MCI-Net, with only 5.48 M parameters, a computational complexity of 4.41, and a time complexity of just 0.263. By performing linear modeling on sequences, MCI-Net permanently marks effective features and filters out irrelevant information. It efficiently captures local-global information with a small number of channels, reduces the number of parameters, and utilizes attention calculations with exchange value mapping. This keeps the model lightweight while enabling thorough interaction of local and global information within the computation, establishing an overall semantic relationship between local and global information. 
To verify the effectiveness of the MCI-Net network, we conducted comparative experiments with other advanced representative networks on five public datasets: X-ray, Lung, ISIC-2016, ISIC-2018, and capsule endoscopy and gastrointestinal segmentation. We also performed ablation experiments on the first four datasets. MCI-Net outperformed the other compared networks in these experiments, confirming its effectiveness. This research provides a valuable reference for achieving lightweight, accurate, and high-performance medical image segmentation network models.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142494077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
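Segmentation networks like the one above are conventionally compared with overlap metrics. The abstract does not list the exact metrics used, so as a generic example, the Dice similarity coefficient between a predicted and a reference mask:

```python
import numpy as np

def dice(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy masks: top half vs middle rows of a 4x4 grid overlap in one row
a = np.zeros((4, 4), int); a[:2] = 1   # 8 pixels
b = np.zeros((4, 4), int); b[1:3] = 1  # 8 pixels, 4 shared
score = dice(a, b)  # 2 * 4 / (8 + 8) = 0.5
```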
{"title":"Pioneering diabetes screening tool: machine learning driven optical vascular signal analysis.","authors":"Sameera Fathimal M, J S Kumar, A Jeya Prabha, Jothiraj Selvaraj, Angeline Kirubha S P","doi":"10.1088/2057-1976/ad89c8","DOIUrl":"10.1088/2057-1976/ad89c8","url":null,"abstract":"<p><p>The escalating prevalence of diabetes mellitus underscores the critical need for non-invasive screening tools capable of early disease detection. Current diagnostic techniques depend on invasive procedures, highlighting the need for non-invasive alternatives for initial disease detection. Machine learning integrated with optical sensing technology can effectively analyze the signal patterns associated with diabetes. The objective of this research is to develop and evaluate a non-invasive optical method combined with machine learning algorithms for the classification of individuals into normal, prediabetic, and diabetic categories. A novel device was engineered to capture real-time optical vascular signals from participants representing the three glycemic states. The signals were then subjected to quality assessment and preprocessing to ensure data reliability. Subsequently, feature extraction was performed using time-domain analysis and wavelet scattering techniques to derive meaningful characteristics from the optical signals. The extracted features were then employed to train and validate a suite of machine learning algorithms. An ensemble bagged-trees classifier with wavelet scattering features and a random forest classifier with time-domain features demonstrated superior performance, achieving overall accuracies of 86.6% and 80.0%, respectively, in differentiating between normal, prediabetic, and diabetic individuals based on the optical vascular signals. The proposed non-invasive optical approach, coupled with advanced machine learning techniques, holds promise as a potential screening tool for diabetes mellitus. 
The classification accuracy achieved in this study warrants further investigation and validation in larger and more diverse populations.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":null,"pages":null},"PeriodicalIF":1.3,"publicationDate":"2024-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142494078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
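The time-domain analysis mentioned in the abstract above reduces each vascular waveform to a handful of scalar descriptors before classification. A minimal sketch on a synthetic pulse-like signal; the specific descriptors below are generic illustrations, not the paper's feature set.

```python
import numpy as np

def time_domain_features(sig):
    """A few generic time-domain descriptors of a 1D signal."""
    diff = np.diff(sig)
    return {
        "mean": sig.mean(),
        "std": sig.std(),
        "rms": np.sqrt(np.mean(sig ** 2)),
        "skewness": np.mean((sig - sig.mean()) ** 3) / sig.std() ** 3,
        "mean_abs_slope": np.abs(diff).mean(),  # crude waveform-sharpness cue
    }

# Synthetic "vascular" waveform: 1.2 Hz pulsation plus sensor noise, 5 s
rng = np.random.default_rng(2)
t = np.arange(0, 5, 0.01)
sig = np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(t.size)
f = time_domain_features(sig)  # feature vector for a downstream classifier
```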