Classification of sleep apnea syndrome using the spectrograms of EEG signals and YOLOv8 deep learning model
Kubra Tanci, Mahmut Hekim
PeerJ Computer Science, vol. 11, e2718 (published 2025-02-21, eCollection 2025)
DOI: 10.7717/peerj-cs.2718
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11888935/pdf/
Impact factor: 3.5 | JCR: Q2 (Computer Science, Artificial Intelligence) | CAS region: 4 (Computer Science)
Cited by: 0
Abstract
In this study, we focus on classifying sleep apnea syndrome using spectrograms obtained from electroencephalogram (EEG) signals taken from polysomnography (PSG) recordings, together with the You Only Look Once (YOLO) v8 deep learning model. To this end, spectrograms are computed by the short-time Fourier transform (STFT) from 30-s windowed segments of EEG signals with different apnea-hypopnea index (AHI) values. The spectrograms are used as inputs to the YOLOv8 model to classify sleep apnea syndrome as mild, moderate, or severe apnea, or healthy. For four-class classification models, the chance-level baseline is 25%, assuming equal class probabilities or an equal number of samples per class; this baseline is an important reference point for assessing the validity of our results. Deep learning methods are frequently used for the classification of EEG signals. Although ResNet64 and YOLOv5 give effective results, YOLOv8 stands out with fast processing times and high accuracy. In the existing literature, parameter reduction approaches for four-class EEG classification have not been adequately addressed, leaving limitations in this area. This study evaluates the performance of parameter reduction methods in EEG classification using YOLOv8, fills gaps in the existing literature on four-class classification, and reduces the number of parameters of the models used. Studies in the literature have generally treated sleep apnea syndrome as a binary (apnea/healthy) classification problem and ignored distinctions between apnea severity levels. Furthermore, most existing studies have used models with a high number of parameters and have been computationally demanding. In this study, by contrast, the use of spectrograms is proposed to obtain higher correct classification ratios with more accurate and faster models.
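The 30-s segmentation and STFT step described above can be sketched with a minimal NumPy illustration. Note that the sampling rate, window length, and hop size below are assumptions for the sketch, not the paper's actual settings:

```python
import numpy as np

def stft_spectrogram(x, win_len, hop):
    """Magnitude spectrogram of a 1-D signal via a Hann-windowed STFT."""
    win = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop : i * hop + win_len] * win
                       for i in range(n_frames)])
    # Real FFT over each frame; rows = frequency bins, columns = time frames.
    return np.abs(np.fft.rfft(frames, axis=1)).T

fs = 100                               # assumed EEG sampling rate (Hz)
segment = np.random.randn(30 * fs)     # one 30-s EEG segment (synthetic)
S = stft_spectrogram(segment, win_len=256, hop=128)
print(S.shape)                         # (frequency bins, time frames)
```

In a full pipeline, each such spectrogram would be rendered as an image and fed to the classifier.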
The same classification experiments are reimplemented with the widely used ResNet64 and YOLOv5 deep learning models to compare their performance with that of the proposed model. In the implemented experiments, the total correct classification (TCC) ratios are 93.7%, 93%, and 88.2% for YOLOv8, ResNet64, and YOLOv5, respectively. These experiments show that the YOLOv8 model achieves higher success ratios than the ResNet64 and YOLOv5 models. Although the TCC ratios of the YOLOv8 and ResNet64 models are comparable, the YOLOv8 model uses fewer parameters and layers than the others, providing faster processing times alongside the higher TCC ratio. The findings make a significant contribution to the current state of the art. As a result, this study suggests that the YOLOv8 deep learning model can serve as a new tool for classifying sleep apnea syndrome from EEG signals.
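The TCC ratio reported above is the fraction of all samples assigned to their true class, which can be read off a four-class confusion matrix as the trace divided by the total count. A short sketch using hypothetical counts (not the paper's actual results):

```python
import numpy as np

# Hypothetical 4-class confusion matrix (rows: true class, cols: predicted);
# classes: healthy, mild, moderate, severe. Counts are illustrative only.
cm = np.array([[48,  1,  1,  0],
               [ 2, 46,  2,  0],
               [ 1,  2, 47,  0],
               [ 0,  0,  1, 49]])

# Diagonal entries are correct predictions; TCC = correct / all samples.
tcc = np.trace(cm) / cm.sum() * 100
print(f"TCC = {tcc:.1f}%")
```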
Journal introduction:
PeerJ Computer Science is the new open access journal covering all subject areas in computer science, with the backing of a prestigious advisory board and more than 300 academic editors.