{"title":"Cervical vertebral maturation assessment using an innovative artificial intelligence-based imaging analysis system","authors":"","doi":"10.1016/j.bspc.2024.107088","DOIUrl":"10.1016/j.bspc.2024.107088","url":null,"abstract":"<div><div>The Cervical Vertebral Maturation (CVM) assessment plays a pivotal role in orthodontic diagnosis and treatment planning by providing insights into skeletal growth and enabling timely interventions. This study introduces an innovative approach to predict CVM stages based on novel imaging markers extracted from X-ray images, which are then correlated with CVM stages. The proposed system comprises the following main steps: (i) starting with manual delineation of the cervical vertebrae (i.e., C2, C3, and C4) from the X-ray images; (ii) parcellating the cervical vertebrae based on the Marching level-sets approach to generate five iso-contours for each segmented cervical vertebra; the primary objective of vertebrae segmentation is to extract both local and global imaging markers to accurately grade and classify CVM stages; (iii) extracting first- and second-order appearance and morphology imaging markers that describe the shape and appearance of each extracted cervical vertebra; and (iv) employing two-stage classifiers to grade and classify CVM for each patient. Without data augmentation, the system demonstrated promising results, achieving an accuracy of 95.85%, sensitivity of 88.03%, specificity of 97.20%, and precision of 88.70%. After applying data augmentation techniques, the accuracy improved to 98.89%, with a mean score of 97.20%. To the best of our knowledge, this is the first system to assess the six stages of CVM with such high accuracy. 
The proposed AI-based system will enhance orthodontic patient care in the USA and worldwide by providing a new non-invasive tool for early CVM assessment.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142551985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
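The CVM abstract above reports accuracy, sensitivity, specificity, and precision for a six-stage (multi-class) problem. As background only, and not the paper's code, here is a minimal sketch of how such per-class metrics can be macro-averaged from a one-vs-rest confusion matrix; all function and variable names are illustrative:

```python
import numpy as np

def one_vs_rest_metrics(conf):
    """Macro-averaged one-vs-rest metrics from a square confusion
    matrix conf[true_class, predicted_class]."""
    conf = np.asarray(conf, dtype=float)
    total = conf.sum()
    acc, sens, spec, prec = [], [], [], []
    for k in range(conf.shape[0]):
        tp = conf[k, k]
        fn = conf[k, :].sum() - tp   # true class k, predicted as another class
        fp = conf[:, k].sum() - tp   # predicted k, true class is another class
        tn = total - tp - fn - fp
        acc.append((tp + tn) / total)
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
        prec.append(tp / (tp + fp))
    return {"accuracy": float(np.mean(acc)),
            "sensitivity": float(np.mean(sens)),
            "specificity": float(np.mean(spec)),
            "precision": float(np.mean(prec))}

# toy 3-class confusion matrix, purely for illustration
m = one_vs_rest_metrics([[8, 1, 1],
                         [0, 9, 1],
                         [1, 0, 9]])
```

Papers differ in whether they macro- or micro-average such per-class figures; the sketch shows the macro variant.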
{"title":"A generative adversarial network based on deep supervision for anatomical and functional image fusion","authors":"","doi":"10.1016/j.bspc.2024.107011","DOIUrl":"10.1016/j.bspc.2024.107011","url":null,"abstract":"<div><div>Medical image fusion techniques improve single-image representations by integrating salient information from medical images of different modalities. However, existing fusion methods suffer from limitations, such as vanishing gradients, blurred details, and low efficiency. To alleviate these problems, a generative adversarial network based on deep supervision (DSGAN) is proposed. First, a two-branch structure is proposed to separately extract salient information, such as texture and metabolic information, from different modal images. Self-supervised learning is performed by building a new deep supervision module to enhance effective feature extraction. The fused image and the multimodal input images are then fed into the discriminator. An adversarial loss based on the Earth Mover’s distance ensures that more spatial frequency, gradient, and contrast information is retained in the fused image, and makes model training more stable. In addition, DSGAN is an end-to-end model that does not require manually designed fusion rules. 
Compared with classic fusion methods, the proposed DSGAN retains rich texture details and edge information in the input image, fuses images faster, and exhibits superior performance in objective evaluation metrics.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142551983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Studying of deep neural networks and delta and alpha sub-bands harmony signals for Prediction of epilepsy","authors":"","doi":"10.1016/j.bspc.2024.107066","DOIUrl":"10.1016/j.bspc.2024.107066","url":null,"abstract":"<div><div>Epilepsy, a seizure disorder, is one of the significant diseases in the global community. More than 1% of the world’s population is affected by this disease. In mild cases, it can be controlled with medication. Neurologists use Electroencephalography (EEG) to diagnose epilepsy in most medical centers and hospitals. In recent years, researchers have conducted numerous studies to estimate epilepsy attacks using EEG. In this study, a new method is presented to enhance the accuracy, sensitivity, and other necessary parameters for estimating epilepsy attacks. In the proposed algorithm, the processing of brain signals is performed in two stages. In the first stage, the brain signals are decomposed into delta, theta, beta, and alpha sub-bands using the Discrete Wavelet Transform (DWT). Subsequently, the classification accuracy of each sub-band is analyzed using a Long Short-Term Memory (LSTM) neural network. Sub-bands with an accuracy of over 70% are selected for the second stage. In the second processing stage, the selected sub-band harmonic signal images are used as input to a convolutional neural network (CNN) for feature extraction and the final decision. The use of the proposed method results in an improvement in all parameters for estimating epilepsy attacks, including accuracy, sensitivity, and AUC. 
The results of the proposed method show a 45% increase compared to the conventional method.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142537891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
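The epilepsy abstract above decomposes EEG into frequency sub-bands with the DWT before per-band screening. As an illustration only, here is a minimal multi-level Haar DWT in NumPy; the paper's wavelet choice and sampling rate are not stated, so the band interpretation here is an assumption, and a library such as PyWavelets would normally be used in practice:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar DWT: returns (approximation, detail)."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                       # pad to even length if needed
        x = np.append(x, x[-1])
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass half
    return a, d

def wavelet_subbands(signal, levels):
    """Decompose a signal into `levels` detail sub-bands plus a final
    approximation, halving the frequency range at each level."""
    subbands, a = [], signal
    for _ in range(levels):
        a, d = haar_dwt_level(a)
        subbands.append(d)               # detail = upper half of current band
    subbands.append(a)                   # final approximation = lowest band
    return subbands

# 4 levels over a 256-sample epoch; for a 256 Hz recording the detail
# bands would roughly cover beta/alpha/theta/delta ranges (assumed fs)
bands = wavelet_subbands(np.sin(np.linspace(0, 20 * np.pi, 256)), 4)
```

Because the Haar transform is orthonormal, the sub-bands jointly preserve the signal energy, which is what makes per-band analysis meaningful.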
{"title":"MFCPNet: Real time medical image segmentation network via multi-scale feature fusion and channel pruning","authors":"","doi":"10.1016/j.bspc.2024.107074","DOIUrl":"10.1016/j.bspc.2024.107074","url":null,"abstract":"<div><div>Real-time medical image segmentation can not only enhance the interactivity and feasibility of applications but also support more medical application scenarios. Local feature extraction methods reliant on Convolutional Neural Networks (CNN) are hampered by restricted receptive fields, which weakens their ability to capture comprehensive information. Conversely, global feature extraction methods based on Transformers generally face impediments in real-time tasks due to their extensive computational demands. To address these challenges and explore accurate, real-time medical image segmentation models, we introduce the novel MFCPNet. MFCPNet begins by devising Multi-Scale Multi-Channel Convolution (MSMC Conv) to extract local features across various levels and scales. This innovative design contributes to extracting richer local information without unduly burdening the model. Second, to enlarge the convolutional receptive field and improve the model’s generalization capability, we introduce a rotation-invariant Attention Block (Attn Block). This block, inspired by lightweight Bi-Level Routing Attention (BRA) and MLP-Mixer, effectively mitigates the constraints of convolutional structures and achieves superior contextual modeling. Finally, a judicious pruning of the channel count is employed within MFCPNet, striking a trade-off between segmentation accuracy and efficiency. To evaluate the proposed method, we compare it with several classic approaches using three different types of datasets: retinal images, brain scans, and colon polyps. Across these datasets, MFCPNet achieves segmentation performance comparable to existing methods, with a computational cost of 2.2G FLOPs and 0.49M parameters. 
Furthermore, it demonstrates a processing speed of 79.54 FPS, meeting the requirements for real-time applications.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142537890","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep-learning based fusion of spatial relationship classification between mandibular third molar and inferior alveolar nerve using panoramic radiograph images","authors":"","doi":"10.1016/j.bspc.2024.107059","DOIUrl":"10.1016/j.bspc.2024.107059","url":null,"abstract":"<div><div>It is crucial for clinicians to have prior knowledge of the spatial relationship between an impacted mandibular third molar (MM3) and the inferior alveolar nerve (IAN) before an extraction procedure. This relationship may take four spatial forms in terms of the IAN position relative to the MM3, although it has not been studied extensively. To identify the relationship type, this study proposes a novel four-class classification framework utilizing a fusion of the AlexNet, VGG16, and VGG19 deep learning models applied to panoramic radiograph (PR) images. For this purpose, 546 PR images of impacted MM3s, collected from 290 patients, were labeled by specialists using the corresponding cone beam computed tomography (CBCT) images. The proposed network was trained and tested using 10-fold cross-validation. Experimental studies were performed in different categories. In the first (binary classification of whether MM3 and IAN are related or unrelated), an accuracy of 94.1% was obtained. In the second, classifying whether the IAN resides on the lingual or vestibular (buccal) side of the MM3, a test accuracy of 80.6% was obtained. Finally, in the challenging four-class problem comprising the unrelated, lingual, vestibule, and other classes, an accuracy of 79.7% was achieved. 
The results show that the proposed method not only achieves state-of-the-art performance but also suggests a new classification basis for the existing MM3-IAN relationship problem.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539563","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"DSU-Net: Dual-Stage U-Net based on CNN and Transformer for skin lesion segmentation","authors":"","doi":"10.1016/j.bspc.2024.107090","DOIUrl":"10.1016/j.bspc.2024.107090","url":null,"abstract":"<div><div>Precise delineation of skin lesions from dermoscopy images is crucial for enhancing the quantitative analysis of melanoma. However, this remains a difficult endeavor due to inherent characteristics such as large variability in lesion size and shape and fuzzy boundaries. In recent years, CNNs and Transformers have demonstrated notable benefits in skin lesion segmentation. Hence, we first propose the DSU-Net segmentation network, which is inspired by the manual segmentation process: through the coordination of its two segmentation sub-networks, the lesion area is initially coarsely identified and then meticulously delineated. Then, we propose a two-stage balanced loss function that better simulates the manual segmentation process by adaptively controlling the loss weight. Further, we introduce a multi-feature fusion module, which combines various feature extraction modules to extract richer feature information, refine the lesion area, and obtain accurate segmentation boundaries. Finally, we conducted extensive experiments on the ISIC2017, ISIC2018, and PH2 datasets to assess and validate the efficacy of DSU-Net by comparing it to the most advanced approaches currently available. 
The code is available at <span><span>https://github.com/ZhongLongwei/DSU-Net</span></span>.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142539559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An automatic segmentation of calcified tissue in forward-looking intravascular ultrasound images","authors":"","doi":"10.1016/j.bspc.2024.107095","DOIUrl":"10.1016/j.bspc.2024.107095","url":null,"abstract":"<div><div>The assessment of images of the coronary artery system plays a crucial part in the diagnosis and treatment of cardiovascular diseases (CVD). Forward-looking intravascular ultrasound (FL-IVUS) has a distinct advantage in assessing CVD due to its superior resolution and imaging capability, especially in severe calcification scenarios. The demarcation of the lumen and media-adventitia, as well as the identification of calcified tissue, constitute the initial steps in assessing CVD such as atherosclerosis using FL-IVUS images. In this research, we introduce a novel approach for automated lumen segmentation and identification of calcified tissue in FL-IVUS images. The proposed method utilizes superpixel segmentation and fuzzy C-means clustering (FCM) to identify regions that potentially correspond to lumina. Furthermore, connected component labeling and active contour methods are employed to refine the contours of the lumina. To handle the distinctive depth information found in FL-IVUS images, ellipse fitting and region detectors are applied to identify areas of calcified tissue. On our dataset of 43 FL-IVUS images, the method achieved mean values for the Jaccard measure, Dice coefficient, Hausdorff distance, and percentage area difference of 0.952 ± 0.016, 0.975 ± 0.008, 0.296 ± 0.186, and 0.019 ± 0.010, respectively. Furthermore, when compared with traditional segmentation approaches, the proposed approach yields higher image quality. 
The test results demonstrate the effectiveness of this innovative automated segmentation technique for detecting the lumina and calcified tissue in FL-IVUS images.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142533300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
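The FL-IVUS abstract above reports Jaccard and Dice overlap scores for the segmented lumina. As a hedged illustration of how these two mask-overlap metrics are defined (not the authors' implementation; mask contents are made up):

```python
import numpy as np

def jaccard(a, b):
    """Jaccard index |A∩B| / |A∪B| for boolean masks a and b."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def dice(a, b):
    """Dice coefficient 2|A∩B| / (|A| + |B|); equals 2J / (1 + J)."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    s = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / s if s else 1.0

# two overlapping 4x4 squares in an 8x8 grid, for illustration
pred = np.zeros((8, 8), bool); pred[2:6, 2:6] = True   # 16 px
gt   = np.zeros((8, 8), bool); gt[3:7, 3:7] = True     # 16 px, 9 px overlap
```

Note that Dice is always at least as large as Jaccard on the same pair of masks, which is consistent with the reported 0.975 versus 0.952.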
{"title":"Deep neural network model for diagnosing diabetic retinopathy detection: An efficient mechanism for diabetic management","authors":"","doi":"10.1016/j.bspc.2024.107035","DOIUrl":"10.1016/j.bspc.2024.107035","url":null,"abstract":"<div><div>Diabetic retinopathy (DR) is a common eye disease and a leading cause of blindness in diabetic patients. Detecting microaneurysms in fundus images and identifying DR at a preliminary stage have been considerable challenges for decades. Systematic screening and intervention are the most efficient mechanisms for disease management. The sizeable population of diabetic patients and their enormous screening requirements have given rise to the computer-aided and automatic diagnosis of DR. The utilization of Deep Neural Networks in DR diagnosis has also attracted much attention, and considerable advancement has been made. Screening tests for DR have sensitivity and specificity that are particular to the population tested; even a correctly performed test can yield a false positive or false negative result. However, despite these advancements, there remains room for improvement in the sensitivity and specificity of DR diagnosis. In this work, a novel method called the Luminosity Normalized Symmetric Deep Convolute Tubular Classifier (LN-SDCTC) for DR detection is proposed. The LN-SDCTC method is split into two parts. Initially, with the retinal color fundus images as input, the Luminosity Normalized Retinal Color Fundus Preprocessing model is applied to produce a noise-minimized, contrast-enhanced image. Second, the preprocessed image is provided as input to the Symmetric Deep Convolute network. 
Here, with the aid of the convolutional layer (i.e., the Tubular Neighborhood Window), the average pooling layer (i.e., average magnitude value of tubular neighbors), and the max-pooling layer (i.e., maximum contrast orientation), relevant features are selected. Finally, with the extracted features as input and with the aid of the Multinomial Regression Classification function, the severity of the DR disease is determined. Extensive experimental results in terms of peak signal-to-noise ratio, disease detection time, sensitivity, and specificity reveal that the proposed DR detection method yields better results than various state-of-the-art methods.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530624","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic skin tumor detection in dermoscopic samples using Online Patch Fuzzy Region Based Segmentation","authors":"","doi":"10.1016/j.bspc.2024.107096","DOIUrl":"10.1016/j.bspc.2024.107096","url":null,"abstract":"<div><div>Skin tumor detection and classification play an important role in research, particularly in medical diagnosis. The classification of tumors in skin cells is of growing significance as the number of affected people increases. The focus of this research is a new, efficient method for enhancing skin images and distinguishing tumors from other areas in computed tomography (CT) skin images, with the methods developed and applied effectively for medical use. The first step is image acquisition. The Boosted Notch Diffusion Filtering − Mean Pixel Histogram Equalization (BNDF-MPHE) algorithm serves as the preprocessing step in the presented model. The pipeline then applies Superpixel Contour Metric Segment Clustering (SCMSC) followed by an Online Patch Fuzzy Region Based Segmentation (OPFRBS) algorithm for effective segmentation of skin tumor cells, with an accuracy of 99.25% for benign and 97.39% for malignant tumors. The time required to process a lesion is less than 2 s. The proposed method was implemented in the MATLAB 2024a workbench, and its accuracy is higher than that of other existing algorithms for both benign and malignant samples. 
The proposed methodology has been validated effectively with real-time clinical samples, helping patients resume a normal life.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142530626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A comparative study between laser speckle contrast imaging in transmission and reflection modes by adaptive window space direction contrast algorithm","authors":"","doi":"10.1016/j.bspc.2024.107091","DOIUrl":"10.1016/j.bspc.2024.107091","url":null,"abstract":"<div><div>Blood flow visualization is of paramount importance in diagnosing and treating vascular diseases. Laser speckle contrast imaging (LSCI) is a widely utilized technique for visualizing blood flow. However, reflection-mode laser speckle contrast imaging (R-LSCI) systems are limited in their imaging depth and primarily suitable for shallow blood flow imaging. In this study, we conducted a comparative analysis of transmission-mode laser speckle contrast imaging (T-LSCI) and R-LSCI using four spatial domain imaging methods: spatial contrast (sK), adaptive window contrast (awK), space-directional contrast (sdK), and adaptive window space direction contrast (awsdK), for deep blood flow imaging. Experimental results show that T-LSCI is superior to R-LSCI in imaging deep blood flow within a certain thickness of tissue. T-LSCI can be used for continuous non-invasive blood flow monitoring. In particular, the awsdK method in T-LSCI substantially improves the visualization of deep blood flow and enhances the ability to monitor blood flow variations.</div></div>","PeriodicalId":55362,"journal":{"name":"Biomedical Signal Processing and Control","volume":null,"pages":null},"PeriodicalIF":4.9,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142533166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
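The sK (spatial contrast) method named in the LSCI abstract computes the contrast K = sigma/mean over a small sliding window of the raw speckle image; lower K indicates faster flow. A minimal, unoptimized NumPy sketch of that definition (the window size and padding mode are assumptions, not taken from the paper):

```python
import numpy as np

def spatial_contrast(img, w=7):
    """Spatial speckle contrast K = std/mean in a w x w sliding window
    (an 'sK'-style map). Loop-based for clarity, not speed."""
    img = np.asarray(img, dtype=float)
    pad = w // 2
    p = np.pad(img, pad, mode="reflect")   # reflect edges to keep full size
    H, W = img.shape
    K = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            win = p[i:i + w, j:j + w]
            m = win.mean()
            K[i, j] = win.std() / m if m > 0 else 0.0
    return K

# a perfectly uniform intensity field has zero speckle contrast
K = spatial_contrast(np.full((10, 12), 5.0))
```

Adaptive-window variants such as awK and awsdK change the window shape or orientation per pixel; this sketch covers only the fixed-window case.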