Comparison of Quantitative Lung Measures in Low Dose Energy-Integrating Detector and Photon-Counting Detector Chest CT with an Anthropomorphic Phantom
Natally ALArab, Marrissa McIntosh, Junfeng Guo, Abhilash Srikumar Kizhakke Puliyakote, Jarron Atha, Jessica Sieren, Eric A Hoffman, Ehsan Abadi, Sean B Fain
Biomedical Physics & Engineering Express, DOI: 10.1088/2057-1976/ae0e27, published 2025-10-01

Photon-counting detector (PCD) computed tomography (CT) promises improved resolution and contrast at reduced X-ray dose compared to energy-integrating detector (EID) CT. The aim of this study was to determine the parameters that achieve robust accuracy of quantitative measures in chest PCD-CT studies compared to quantitative EID-CT at low CT dose. The Kyoto LUNGMAN chest phantom, with a preserved lung tissue core and NIST-calibrated foam density standards (4-20 lbs.), and the COPD Lung Phantom II, with six airways of various outer and inner diameters, were scanned using PCD-CT (NAEOTOM Alpha) and EID-CT (SOMATOM Force) with a target CT dose index (CTDIvol) of 2.2 mGy, matching the dose specified for ongoing longitudinal quantitative chest CT studies of chronic lung disease. Mean density of the foam inserts and mean lumen area (LA) and wall thickness (WT) of the COPD Lung Phantom II airways were automatically segmented, analyzed, and compared using the root mean squared error (RMSE). Contrast-to-noise (CNR) and signal-to-noise (SNR) ratios were also automatically calculated. Large (11.2 mm) and small (5.5 mm) airway LA and WT in the lung tissue core were semi-automatically measured. PCD-CT with the Qr40 kernel yielded superior foam density accuracy (RMSE: 6.1-7.8 HU) compared to EID-CT (RMSE: 9.7 HU). Q+UHR mode with Qr64 and a 1024×1024 matrix achieved the highest airway accuracy (RMSE <1.8 mm² for LA and <0.3 mm for WT). However, these protocols showed increased variability in tracheal air measurements (SD up to 9 HU), indicating a trade-off between higher spatial resolution and measurement repeatability. At equivalent low radiation dose (2.2 mGy CTDIvol), PCD-CT outperforms EID-CT in quantitative accuracy for foam density and airway measurements, with comparable SNR and CNR. These results support the use of PCD-CT for quantitative lung imaging in longitudinal studies, provided reconstruction settings are selected to balance accuracy and repeatability.
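
A note on the metrics used throughout this entry: RMSE against the NIST reference densities, and SNR/CNR from region-of-interest statistics, follow standard definitions. The sketch below illustrates those definitions in Python; the numeric values are illustrative placeholders, not data from the study.

```python
# Hedged sketch of the conventional accuracy and image-quality metrics in
# phantom studies like this one. Array values are illustrative, not the
# paper's data; the roi_* names are hypothetical.
import numpy as np

def rmse(measured, reference):
    """Root mean squared error between measured and reference densities (HU)."""
    measured, reference = np.asarray(measured), np.asarray(reference)
    return float(np.sqrt(np.mean((measured - reference) ** 2)))

def snr(roi):
    """Signal-to-noise ratio of a region of interest."""
    return float(np.mean(roi) / np.std(roi))

def cnr(roi_fg, roi_bg):
    """Contrast-to-noise ratio between a foreground and a background ROI."""
    return float(abs(np.mean(roi_fg) - np.mean(roi_bg)) / np.std(roi_bg))

# Illustrative foam-insert densities (HU): measured vs. NIST-calibrated reference.
print(rmse([-940, -857, -702], [-937, -851, -710]))
```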
{"title":"Relevancy aware cascaded generative adversarial network for LSO-transmission image denoising in CT-less PET.","authors":"Chetana Krishnan, Mohammadreza Teimoorisichani","doi":"10.1088/2057-1976/ae0591","DOIUrl":"10.1088/2057-1976/ae0591","url":null,"abstract":"<p><p><i>Purpose</i>. Achieving high-quality PET imaging while minimizing scan time and patient radiation dose presents significant challenges, particularly in the absence of CT-based attenuation maps. Joint reconstruction algorithms, such as MLAA and MLACF, partially address these challenges but often result in noisy and less reliable images. Denoising these images is critical for enhancing diagnostic accuracy.<i>Approach</i>. This study introduces a novel cascaded relevancy-aware Generative Adversarial Network (reGAN) to improve the denoising and diagnostic reliability of<i>μ</i>-maps derived from joint reconstruction algorithms, ultimately aimed at enhancing PET imaging quality. The reGAN architecture employs a cascaded design incorporating UPlus GAN modules, relevancy mapping, and contextual attention mechanisms. The model was trained using PET/CT data from 16 patients, with MLAA and MLACF-derived<i>μ</i>-maps as input and CT-based<i>μ</i>-maps as the ground truth. Performance was evaluated using metrics such as SSIM, PSNR, VIF, and MSE. Comparative studies were conducted against other popular 2D and 3D GAN architectures.<i>Results</i>. The proposed reGAN achieved the highest SSIM (0.91 for MLAA and 0.93 for MLACF), PSNR (34.7 dB for MLAA and 36.2 dB for MLACF), and VIF (0.89 for MLAA and 0.91 for MLACF), while maintaining the lowest MSE (0.021 for MLAA and 0.018 for MLACF). Qualitative analysis demonstrated that reGAN preserved fine details, particularly in bony structures, and reduced artifacts effectively. Additionally, relevancy maps provided pixel-wise confidence indicators, further aiding interpretability and diagnostic reliability.<i>Conclusion</i>. reGAN presents a robust approach to medical image denoising, combining advanced generative modeling with diagnostic confidence metrics. The proposed method constitutes a viable approach for achieving quantitative accuracy in low-dose PET imaging in the absence of CT.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145032674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

The role of the reference electrode in EEG recordings: looking from an inverted perspective
Gabriel Chaves de Melo, Arturo Forner-Cordero, Gabriela Castellano
Biomedical Physics & Engineering Express, DOI: 10.1088/2057-1976/ae093f, published 2025-09-30

The electroencephalographic signal variability caused by the active reference electrode is a major challenge for the classification of motor tasks in brain-computer interfaces. In this work, a strategy for dealing with the reference is proposed: using the information from all channels to extract more reliable information from the reference, termed the Inverted Perspective Reference Electrode (IPRE). In this novel approach, the original set of signals is re-referenced to the electrode of interest, in contrast with all other available methods. In total, eight scenarios were analyzed independently: C3 and C4 as the reference electrode, alpha and beta frequency bands, and motor imagery and motor execution tasks. Principal component analysis (PCA) was used to extract the information from the reference. This information was analyzed by means of the separability between motor tasks. Thirty-six subsets of electrodes were analyzed, including four typical choices of channels for comparison. A dataset with 109 subjects was used. Results showed that the number and location of electrodes are decisive in providing class-separable signals at the reference electrode. The IPRE showed greater separability compared to typical channel choices. The strategy therefore yielded better outcomes, encouraging further investigation of the inverted perspective to overcome the challenge of the active reference.

Flexible and transparent microelectrode arrays for simultaneous fMRI and single-spike recording in subcortical networks
Scott Greenhorn, Veronique Coizet, Océane Terral, Victor Dupuit, Bruno Fernandez, Guillaume Bres, Arnaud Claudel, Pierre Gasner, Jan Warnking, Emmanuel Barbier, Cécile Delacour
Biomedical Physics & Engineering Express, DOI: 10.1088/2057-1976/ae0d94, published 2025-09-30

Current neuroimaging techniques, including electrical devices, are either of low spatiotemporal resolution or invasive, impeding multiscale monitoring of brain activity at both the single-cell and network levels. Overcoming this issue is of great importance for assessing the brain's computational ability and for neurorehabilitation projects that require real-time monitoring of neurons and concomitant network activities. Currently, that information can be extracted from functional MRI when combined with mathematical models. Novel combinations of measurement techniques that enable quantitative and long-lasting recording at both the single-cell and network levels will make it possible to correlate MRI data with single-cell activity and refine those models. Here, we report the fabrication and validation of ultra-thin, optically transparent, and flexible subcortical microelectrode arrays for combining functional MRI and multisite single-spike recordings. The sensing devices demonstrate both fMRI transparency at 4.7 T and high electrophysiological performance, and thus appear to be promising candidates for simultaneous multiscale neurodynamic measurements.

Assessing the feasibility of deep learning-based attenuation correction using photon emission data in ¹⁸F-FDG images for dedicated head and neck PET scanners
Mahsa Shahrbabaki Mofrad, Ali Ghafari, Amin Amiri Tehranizadeh, Farahnaz Aghahosseini, Mohammad Reza Ay, Saeed Farzenefar, Peyman Sheikhzadeh
Biomedical Physics & Engineering Express, DOI: 10.1088/2057-1976/ae08ba, published 2025-09-30

This study aimed to evaluate the use of deep learning techniques to produce measured attenuation-corrected (MAC) images from non-attenuation-corrected (NAC) ¹⁸F-FDG PET images, focusing on head and neck imaging. A residual network (ResNet) was trained on 2D head and neck PET images from 114 patients (12,068 slices) without pathology or artifacts. For validation during training and testing, 21 and 24 patient images without pathology and artifacts were used, and 12 images with pathologies were used for independent testing. Prediction accuracy was assessed using metrics such as RMSE, SSIM, PSNR, and MSE. The impact of unseen pathologies on the network was evaluated by measuring contrast and SNR in tumoral/hot regions of both reference and predicted images. Statistical significance between the contrast and SNR of reference and predicted images was assessed using a paired-sample t-test. Two nuclear medicine physicians evaluated the predicted head and neck MAC images, finding them visually similar to the reference images. In the normal test group, PSNR, SSIM, RMSE, and MSE were 44.02 ± 1.77, 0.99 ± 0.002, 0.007 ± 0.0019, and 0.000053 ± 0.000030, respectively. For the pathological test group, the values were 43.14 ± 2.10, 0.99 ± 0.005, 0.0078 ± 0.0015, and 0.000063 ± 0.000026, respectively. No significant differences were found in SNR and contrast between reference and test images without pathology (p > 0.05), but significant differences were found in pathological images (p < 0.05). The deep learning network demonstrated the ability to directly generate head and neck MAC images that closely resembled the reference images. With additional training data, the model has the potential to be utilized in dedicated head and neck PET scanners without requiring computed tomography (CT) for attenuation correction.
{"title":"Somatotopic non-invasive proprioceptive feedback strategy for prosthetic hands: a preliminary study.","authors":"Olivier Lecompte, Sofiane Achiche, Amandine Gesta, Abolfazl Mohebbi","doi":"10.1088/2057-1976/ae093e","DOIUrl":"10.1088/2057-1976/ae093e","url":null,"abstract":"<p><p><i>Objective.</i>Robotic hand prosthesis users often identify the lack of physiological feedback as a major obstacle to seamless integration. Both the low controllability and high cognitive load required to operate these devices generally lead to their rejection. Consequently, experts highlight sensory feedback as a critical missing features of commercial prostheses. Providing feedback that promotes the integration of artificial limbs is often sought through a biomimetic paradigm, limited by the current technological landscape and the absence of neural embodiment in users. As a result, some researchers are now turning to bio-inspired approaches, choosing to repurpose existing neural structures and focusing on underlying neurocognitive mechanisms that promote the integration of artificial inputs.<i>Approach.</i>Taking a bio-inspired approach, this paper describes the first implementation of a somatotopic, non-invasive proprioceptive feedback strategy for hand prosthesis users, developed using a standard sensory restoration architecture, i.e. pre-processing, encoding and stimulation. The main hypothesis investigated is whether a novel use of transcutaneous electrical stimulation can be leveraged to deliver proprioceptive information of the hand to the user.<i>Main results.</i>The potential of the proposed strategy was highlighted via experimental validation in conveying specific finger apertures and grasp types related to single and multiple degrees of freedom. Six participants were able to identify apertures conveyed by median and ulnar nerve stimulation with an accuracy of 96.5% ± 2.3% and a response time of 0.91 s ± 0.08 s, as well as grasp types conveyed from concurrent median and ulnar nerve stimulation with an accuracy of 88.3% ± 1.2% and a response time of 0.44 s ± 0.27 s through 5 sets of 10 trials.<i>Significance.</i>These results demonstrate the relevance of a somatotopic proprioception feedback strategy for users of prosthetic hands, and the architecture presented in this case study allows for future optimization of the various sub-components.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145090985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}

Design and Simulation of High-Performance PET Scanners Based on Monolithic-like BGO Crystals Using the GATE Monte Carlo Toolkit
Mohammad Babaei Ghane, Alireza Sadremomtaz, Maryam Saed
Biomedical Physics & Engineering Express, DOI: 10.1088/2057-1976/ae0d93, published 2025-09-30

Background: PET is a highly sensitive imaging modality for visualizing metabolic processes.
Objective: This study evaluates PET scanner designs using monolithic-like BGO detector crystals, aimed at enhancing sensitivity while having minimal impact on spatial resolution.
Methods: Two PET scanners with 16 detector heads were simulated using the GATE Monte Carlo toolkit: (1) a total-body (T-B) scanner with a 105 cm axial field of view (AFOV), and (2) a whole-body (W-B) scanner with a 35 cm AFOV. Both designs employed 1 × 1 × 1.6 cm³ monolithic-like BGO crystals. The performance of both scanners was assessed according to NEMA NU 2-2018 standards, including sensitivity, scatter fraction, noise-equivalent count rate (NECR), and spatial resolution, and was compared with existing scanners. Additionally, point-source sensitivity at the center of the scanner was compared with an analytical model to validate the simulation results.
Results: Good agreement was observed between simulated and analytical point-source sensitivities, with a maximum deviation of 4%. The T-B and W-B scanners achieved sensitivities of 39.73 and 17.87 kcps/MBq at the center of the FOV. Scatter fractions were 35.5% and 29.1% for the T-B and W-B scanners, respectively. The NECR peak was 3498.2 kcps at ~21 kBq/mL for the T-B scanner and 286.8 kcps at ~14 kBq/mL for the W-B scanner. The scanners demonstrated average spatial resolutions of 2.66 mm (T-B) and 2.39 mm (W-B) at the center of the scanner. At the center of the FOV, the T-B scanner showed 24% and 41.8% higher sensitivity compared to the Biograph Vision Quadra and Walk-through PET scanners, respectively, and the W-B scanner showed 8.3% higher sensitivity compared to the Biograph Vision. The T-B and W-B scanners achieved 23% and 37.5% better spatial resolution at the scanner center compared to the Biograph Vision Quadra and Biograph Vision, respectively.
Conclusions: The proposed PET scanners with monolithic-like BGO crystals showed promising sensitivity and resolution, indicating improved potential for PET imaging.
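
The analytical point-source model used for validation is not given in the abstract; a common textbook form for a point source at the center of a cylindrical geometry combines the solid-angle fraction with the squared single-crystal stopping efficiency. A sketch under those assumptions follows (the ring radius is our guess; the crystal depth and material come from the abstract):

```python
# Hedged sketch of a common analytical model for central point-source
# sensitivity in a cylindrical PET geometry; whether the paper used exactly
# this form is an assumption. For a point at the center of a cylinder of
# axial length L and radius R, the solid-angle fraction is
# L / sqrt(L^2 + 4 R^2), and coincidence detection scales with the squared
# single-crystal efficiency.
import math

def point_source_sensitivity(L_cm, R_cm, mu_per_cm, depth_cm):
    eps = 1.0 - math.exp(-mu_per_cm * depth_cm)       # single-crystal stopping efficiency
    geom = L_cm / math.sqrt(L_cm**2 + 4.0 * R_cm**2)  # solid-angle fraction at the center
    return geom * eps**2                              # idealized coincidence sensitivity

# Illustrative inputs: 105 cm AFOV and 1.6 cm BGO depth from the abstract;
# the 40 cm ring radius and mu ~ 0.95 /cm for BGO at 511 keV are our
# assumptions. Real scanners fall well below this ideal fraction due to
# detector gaps, packing fraction, energy windows, and dead time.
print(point_source_sensitivity(105.0, 40.0, 0.95, 1.6))
```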
{"title":"TFDISNet: Temporal-frequency domain-invariant and domain-specific feature learning network for enhanced auditory attention decoding from EEG signals.","authors":"Zhongcai He, Yongxiong Wang","doi":"10.1088/2057-1976/ae09b2","DOIUrl":"10.1088/2057-1976/ae09b2","url":null,"abstract":"<p><p>Auditory Attention Decoding (AAD) from Electroencephalogram (EEG) signals presents a significant challenge in brain-computer interface (BCI) research due to the intricate nature of neural patterns. Existing approaches often fail to effectively integrate temporal and frequency domain information, resulting in constrained classification accuracy and robustness. To address these shortcomings, a novel framework, termed the Temporal-Frequency Domain-Invariant and Domain-Specific Feature Learning Network (TFDISNet), is proposed to enhance AAD performance. A dual-branch architecture is utilized to independently extract features from the temporal and frequency domains, which are subsequently fused through an advanced integration strategy. Within the fusion module, shared features, common across both domains, are aligned by minimizing a similarity loss, while domain-specific features, essential for the task, are preserved through the application of a dissimilarity loss. Additionally, a reconstruction loss is employed to ensure that the fused features accurately represent the original signal. These fused features are then subjected to classification, effectively capturing both shared and unique characteristics to improve the robustness and accuracy of AAD. Experimental results show TFDISNet outperforms state-of-the-art models, achieving 97.1% accuracy on the KUL dataset and 88.2% on the DTU dataset with a 2 s window, validated across group, subject-specific, and cross-subject analyses. Component studies confirm that integrating temporal and frequency features boosts performance, with the full TFDISNet surpassing its variants. Its dual-branch design and advanced loss functions establish a robust EEG-based AAD framework, setting a new field standard.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145124100","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"EigenU-Net: integrating eigenvalue decomposition of the Hessian into U-Net for 3D coronary artery segmentation.","authors":"Cathy Ong Ly, Chris McIntosh","doi":"10.1088/2057-1976/ae08bb","DOIUrl":"10.1088/2057-1976/ae08bb","url":null,"abstract":"<p><p><i>Objective</i>. Coronary artery segmentation is critical in medical imaging for the diagnosis and treatment of cardiovascular disease. However, manual segmentation of the coronary arteries is time-consuming and requires a high level of training and expertise.<i>Approach</i>. Our model, EigenU-Net, presents a novel approach to coronary artery segmentation of cardiac computed tomography angiography (CCTA) images that seeks to directly embed the geometrical properties of tubular structures, i.e. arteries, into the model. To examine the local structure of objects in the image we have integrated a closed-form solution of the eigenvalues of the Hessian matrix of each voxel for input into an U-Net based architecture.<i>Main results</i>. We demonstrate the feasibility and potential of our approach on the public IMAGECAS dataset consisting of 1000 CCTAs. The best performing model at 87% centerline Dice was EigenU-Net with Gaussian pre-filtering of the images.<i>Significance</i>. We were able to directly integrate the calculation of eigenvalues into our model EigenU-Net, to capture more information about the structure of the coronary vessels. EigenU-Net was able to segment regions that were overlooked by other models.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145085123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Machine learning based classification of imagined speech electroencephalogram data from the amplitude and phase spectrum of frequency domain EEG signal.","authors":"Meenakshi Bisla, Radhey Shyam Anand","doi":"10.1088/2057-1976/ae04ee","DOIUrl":"10.1088/2057-1976/ae04ee","url":null,"abstract":"<p><p>Imagined speech classification involves decoding brain signals to recognize verbalized thoughts or intentions without actual speech production. This technology has significant implications for individuals with speech impairments, offering a means to communicate through neural signals. The prime objective of this work is to propose an innovative machine learning (ML) based classification methodology that combines electroencephalogram (EEG) data augmentation using a sliding window technique with statistical feature extraction from the amplitude and phase spectrum of frequency domain EEG segments. This work uses an EEG dataset recorded from a 64-channel device during the imagination of long words, short words, and vowels with 15 human subjects. First, the raw EEG data is filtered between 1 Hz and 100 Hz, then segmented using a sliding window-based data augmentation technique with a window size of 100 and 50% overlap. The Fourier Transform is applied to each windowed segment to compute the amplitude and phase spectrum of the signal at each frequency point. The next step is to extract 50 statistical features from the amplitude and phase spectrum of frequency domain segments. Out of these, the 25 most statistically significant features are selected by applying the Kruskal-Walli's test. The extracted feature vectors are classified using six different machine learning based classifiers named support vector machine (SVM), K nearest neighbor (KNN), Random Forest (RF), XGBoost, LightGBM, and CatBoost. The CatBoost classifier outperforms other machine learning classifiers by achieving the highest accuracy of 91.72 ± 1.52% for long words classification, 91.68 ± 1.54% for long versus short word classification, 88.05 ± 3.07% for short word classification, and 88.89 ± 1.97% for vowel classification. The performance of the proposed model is assessed using five performance evaluation metrics: accuracy, F1-score, precision, recall, and Cohen's kappa. Compared to the existing literature, this study has achieved a 5%-7% improvement with the CatBoost classifier and extracted feature matrix.</p>","PeriodicalId":8896,"journal":{"name":"Biomedical Physics & Engineering Express","volume":" ","pages":""},"PeriodicalIF":1.6,"publicationDate":"2025-09-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145028897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}