Neural Computing and Applications: Latest Articles

Evidential neural network for tensile stress uncertainty quantification in thermoplastic elastomers
Neural Computing and Applications Pub Date: 2024-08-15 DOI: 10.1007/s00521-024-10320-0
Alejandro E. Rodríguez-Sánchez
Abstract: This work presents the use of artificial neural networks (ANNs) with deep evidential regression to model the tensile stress response of a thermoplastic elastomer (TPE) under uncertainty. Three Gaussian noise scenarios were added to a previously published TPE dataset to simulate noise in the stress response. The trained ANN models handled stress–strain data not used for training or validation, even in the presence of noise, and in all tested scenarios the predicted uncertainty covered the noisy TPE stress-response data within ±3σ. The method was extended to other grades of Hytrel material, where the ANN architectures achieved coefficients of determination of about 0.9. These results suggest that shallow neural networks, trained with evidential output layers and an evidential regression loss, can predict, generalize, and simulate noisy tensile stress responses in TPE materials.
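A minimal sketch of the evidential machinery this abstract describes: a deep evidential regression head outputs the four Normal-Inverse-Gamma parameters (γ, ν, α, β), which are trained with the NIG negative log-likelihood and decomposed into aleatoric and epistemic uncertainty. This is the standard formulation of deep evidential regression, not the authors' exact architecture; the example values are illustrative.

```python
import math

def nig_nll(y, gamma, nu, alpha, beta):
    """Negative log-likelihood of observation y under a Normal-Inverse-Gamma
    evidential output (gamma, nu, alpha, beta)."""
    omega = 2.0 * beta * (1.0 + nu)
    return (0.5 * math.log(math.pi / nu)
            - alpha * math.log(omega)
            + (alpha + 0.5) * math.log(nu * (y - gamma) ** 2 + omega)
            + math.lgamma(alpha) - math.lgamma(alpha + 0.5))

def uncertainties(nu, alpha, beta):
    """Split the NIG output into aleatoric E[sigma^2] and epistemic Var[mu]."""
    aleatoric = beta / (alpha - 1.0)          # expected data noise
    epistemic = beta / (nu * (alpha - 1.0))   # uncertainty about the mean
    return aleatoric, epistemic

# A single prediction gamma=1.0 with moderate evidence:
print(uncertainties(nu=4.0, alpha=3.0, beta=2.0))  # (1.0, 0.25)
```

The ±3σ coverage check mentioned in the abstract then amounts to verifying that the noisy targets fall within gamma ± 3·sqrt(aleatoric + epistemic).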
Citations: 0
Traffic sign detection and recognition based on MMS data using YOLOv4-Tiny algorithm
Neural Computing and Applications Pub Date: 2024-08-14 DOI: 10.1007/s00521-024-10279-y
Hilal Gezgin, Reha Metin Alkan
Abstract: Traffic signs are critical to driving safety, and emerging autonomous vehicles must be able to automatically detect and recognize road inventory such as traffic signs. This study proposes a method based on a mobile mapping system (MMS) for detecting traffic signs and building a Turkish traffic sign dataset. Obtaining images from real traffic scenes with the MMS enhances the reliability of the model, and the approach is easy to apply in practice, both in cost and in suitability for mobile and autonomous systems. Within this framework, YOLOv4-Tiny, an object detection algorithm well suited to mobile platforms, is used to detect and recognize the signs: thanks to its simple network structure, it has lower computational cost than comparable algorithms, fits embedded devices better, and is a strong option for real-time detection. The model was trained on a dataset consisting partly of MMS images from realistic field measurements and partly of images from open datasets, reaching a mean average precision (mAP) of 98.1%. The trained model was first tested on existing images and then in real time in a laboratory environment using a simple fixed web camera. The test results show that the suggested method can improve driving safety by detecting traffic signs quickly and accurately, making it suitable for use in autonomous vehicles.
Citations: 0
PRF: deep neural network compression by systematic pruning of redundant filters
Neural Computing and Applications Pub Date: 2024-08-14 DOI: 10.1007/s00521-024-10256-5
C. H. Sarvani, Mrinmoy Ghorai, S. H. Shabbeer Basha
Abstract: In deep neural networks, the filters of convolutional layers play an important role in extracting features from the input. Redundant filters often extract similar features, increasing computational overhead and model size. To address this, a two-step approach is proposed. First, clusters of redundant filters are identified from the cosine distances between them using hierarchical agglomerative clustering (HAC). Then, instead of pruning all redundant filters from every cluster in a single shot, the filters are pruned systematically: cluster importance across all clusters and filter importance within each cluster are computed using an ℓ1-norm-based criterion, and filters are pruned from the least important cluster to the most important according to the pruning ratio. The proposed method outperforms other clustering-based approaches on the benchmark CIFAR-10 and ImageNet datasets. After pruning 83.92% of the parameters from the VGG-16 architecture, an improvement over the baseline is observed. After pruning 54.59% and 49.33% of the FLOPs from ResNet-56 and ResNet-110, respectively, both show improved accuracy, and after pruning 52.97% of the FLOPs from ResNet-50, its top-5 accuracy on ImageNet drops by only 0.56.
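The two-step pruning idea can be sketched in a few lines of numpy. Note the hedges: simple threshold grouping over cosine distances stands in for full hierarchical agglomerative clustering, and the threshold and toy filters are illustrative, not the paper's settings.

```python
import numpy as np

def cosine_dist(a, b):
    return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def group_redundant(filters, thresh=0.1):
    """Group filters whose pairwise cosine distance falls below `thresh`
    (union-find threshold grouping as a simple stand-in for HAC)."""
    n = len(filters)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if cosine_dist(filters[i], filters[j]) < thresh:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(n):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

def prune_order(filters, clusters):
    """Rank clusters by total l1-norm importance; within each cluster the
    smallest-l1 filters come first. Prune from the front of this order,
    up to the desired pruning ratio."""
    l1 = [np.abs(f).sum() for f in filters]
    order = []
    for c in sorted(clusters, key=lambda c: sum(l1[i] for i in c)):
        order.extend(sorted(c, key=lambda i: l1[i]))
    return order

# Filters 0 and 1 are nearly parallel (redundant); filter 2 is orthogonal.
filters = [np.array([1.0, 0.0]), np.array([0.99, 0.01]), np.array([0.0, 5.0])]
clusters = group_redundant(filters)
print(prune_order(filters, clusters))  # [0, 1, 2]
```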
Citations: 0
A two-stage algorithm for heterogeneous face recognition using Deep Stacked PCA Descriptor (DSPD) and Coupled Discriminant Neighbourhood Embedding (CDNE)
Neural Computing and Applications Pub Date: 2024-08-14 DOI: 10.1007/s00521-024-10272-5
Shubhobrata Bhattacharya
Abstract: Automatic face recognition has made significant progress in recent decades, particularly in controlled environments. However, recognizing faces across different modalities, known as Heterogeneous Face Recognition (HFR), remains challenging because of the modality gap. This paper addresses HFR with a two-stage algorithm. In the first stage, a Deep Stacked PCA Descriptor (DSPD) extracts domain-invariant features from face images of different modalities. The DSPD uses multiple convolution layers of domain-trained PCA filters, and the features extracted from each layer are concatenated into a final representation; pre-processing steps applied to the input images enhance the prominence of facial edges, making the features more distinctive. The DSPD features can be used directly for recognition with nearest-neighbour algorithms. To further improve robustness, the second stage introduces a coupled subspace, Coupled Discriminant Neighbourhood Embedding (CDNE), which is trained with a limited number of samples and projects DSPD features from different modalities onto a common subspace. In this subspace, data points of the same subject from different modalities lie close together while those of different subjects lie apart, enhancing nearest-neighbour recognition of heterogeneous faces. Experimental results on VIS-NIR, VIS-Sketch, and VIS-Thermal face pairs from the respective databases demonstrate the effectiveness of the proposed algorithm, which shows promising performance against the modality gap and offers a potential solution for accurate and robust HFR.
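As an illustration of the PCA-filter idea behind descriptors of this kind, the sketch below learns one layer of convolutional filters as the leading principal components of mean-removed image patches (in the spirit of PCANet-style descriptors). The patch size, filter count, and random images are assumptions for the demo, not the paper's configuration.

```python
import numpy as np

def learn_pca_filters(images, patch=5, n_filters=4):
    """Learn a bank of conv filters as the top principal components
    of mean-removed patches drawn from the training images."""
    patches = []
    for img in images:
        h, w = img.shape
        for i in range(h - patch + 1):
            for j in range(w - patch + 1):
                p = img[i:i + patch, j:j + patch].ravel()
                patches.append(p - p.mean())        # remove patch mean
    X = np.stack(patches)                           # (num_patches, patch*patch)
    cov = X.T @ X / len(X)
    eigval, eigvec = np.linalg.eigh(cov)            # ascending eigenvalues
    top = eigvec[:, ::-1][:, :n_filters]            # leading components
    return top.T.reshape(n_filters, patch, patch)

rng = np.random.default_rng(0)
imgs = [rng.standard_normal((12, 12)) for _ in range(3)]
bank = learn_pca_filters(imgs)
print(bank.shape)  # (4, 5, 5)
```

Stacking several such layers and concatenating their responses gives a deep, training-light descriptor in the same spirit as the DSPD.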
Citations: 0
Gene expression clock: an unsupervised deep learning approach for predicting circadian rhythmicity from whole genome expression
Neural Computing and Applications Pub Date: 2024-08-14 DOI: 10.1007/s00521-024-10316-w
Aram Ansary Ogholbake, Qiang Cheng
Abstract: Circadian rhythms are driven by an internal molecular clock that controls physiological and behavioral processes, and their disruption has been associated with health issues. Studying circadian rhythms is therefore crucial for understanding physiology, behavior, and pathophysiology, but doing so from gene expression data is difficult because time labels are scarce. This paper proposes a novel approach for predicting the phases of un-timed samples based on a deep neural network (DNN) architecture. The approach addresses two challenges: (1) predicting sample phases and reliably identifying cyclic genes from high-dimensional expression data without relying on conserved circadian genes, and (2) handling datasets with small sample sizes. The algorithm begins with an initial screening that selects candidate cyclic genes using a Minimum Distortion Embedding framework, followed by greedy layer-wise pre-training of the DNN. Pre-training accomplishes two critical objectives: it initializes the hidden layers so they can effectively capture features from gene profiles with limited samples, and it provides suitable initial values for the essential aspects of gene periodic oscillations. The pre-trained network is then fine-tuned for precise phase prediction. Extensive experiments on both animal and human datasets show accurate and robust prediction of both sample phases and cyclic genes. Moreover, on an Alzheimer's disease (AD) dataset, the method identifies a set of hub genes that oscillate significantly in cognitively normal subjects but are disrupted in AD, along with their potential therapeutic targets.
Citations: 0
Hybrid-Mode tracker with online SA-LSTM updater
Neural Computing and Applications Pub Date: 2024-08-14 DOI: 10.1007/s00521-024-10354-4
Hongsheng Zheng, Yun Gao, Yaqing Hu, Xuejie Zhang
Abstract: The backbone network and the target template are pivotal to the performance of Siamese trackers. Traditional approaches, however, struggle to eliminate local redundancy and establish global dependencies when learning visual representations: convolutional neural networks (CNNs) and vision transformers (ViTs), the backbones commonly used in Siamese trackers, each address only one of these challenges. Moreover, tracking is a dynamic process, yet many Siamese trackers match the target state against only a fixed initial template, which handles scenes with target deformation, occlusion, and fast motion poorly. This paper proposes a Hybrid-Mode Siamese tracker with an online SA-LSTM updater. Distinct learning operators are tailored to the characteristics of different depth levels of the backbone, integrating convolution and transformers into a Hybrid-Mode backbone that efficiently learns global dependencies among input tokens while minimizing redundant computation in local domains, enriching the features available for tracking. The online SA-LSTM updater integrates spatial–temporal context during tracking, producing dynamic template features that better represent the target's appearance. Extensive experiments on multiple benchmark datasets, including GOT-10K, LaSOT, TrackingNet, OTB-100, UAV123, and NFS, demonstrate that the proposed method achieves outstanding performance, running at 35 FPS on a single GPU.
Citations: 0
HMedCaps: a new hybrid capsule network architecture for complex medical images
Neural Computing and Applications Pub Date: 2024-08-14 DOI: 10.1007/s00521-024-10147-9
Sumeyra Busra Sengul, Ilker Ali Ozkan
Abstract: Recognizing and analyzing medical images is crucial for early disease detection and for planning treatment suited to the patient's individual needs and disease history. Deep learning is widely used in healthcare because it can analyze images rapidly and precisely; however, since every object in a medical image can carry disease information, the images must be analyzed with minimal information loss. The Capsule Network (CapsNet) architecture addresses this by storing the location and properties of image objects as capsules, but because CapsNet maintains information on every object, the presence of many objects in complicated images can impair its performance. This work proposes a new model, HMedCaps, to improve on CapsNet. The model builds a deeper hybrid feature extraction layer that combines a Residual Block with a FractalNet module: deepening the network increases the number of extracted features and yields richer feature maps, while the skip connections in these modules counter the vanishing gradient problem that can arise with increasing depth. Furthermore, a new squash function is proposed that customizes capsule activation to make distinctive capsules more prominent. The model was evaluated on the CIFAR10 dataset of complex images, the RFMiD dataset of retinal images, and the Blood Cell Count Dataset of blood cell images. Compared with the basic CapsNet and with studies in the literature, the proposed model improved performance on complex images and produced more accurate classification results, suggesting that the hybrid HMedCaps architecture has the potential to support more accurate diagnoses in medical image analysis.
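The paper proposes a custom squash function; for reference, the standard CapsNet squash it modifies, which preserves a capsule vector's direction while mapping its length into [0, 1), can be written as:

```python
import numpy as np

def squash(s, eps=1e-8):
    """Standard capsule squash: v = (|s|^2 / (1 + |s|^2)) * s / |s|.
    Short vectors shrink toward zero; long vectors approach unit length."""
    sq_norm = np.sum(s * s, axis=-1, keepdims=True)
    scale = sq_norm / (1.0 + sq_norm) / np.sqrt(sq_norm + eps)
    return scale * s

v = squash(np.array([3.0, 4.0]))
print(np.linalg.norm(v))  # ≈ 0.9615 (= 25/26)
```

A modified squash like the one the abstract mentions typically reshapes this norm-to-activation curve so that discriminative capsules stand out more sharply.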
Citations: 0
YOLOv7 for brain tumour detection using morphological transfer learning model
Neural Computing and Applications Pub Date: 2024-08-12 DOI: 10.1007/s00521-024-10246-7
Sanat Kumar Pandey, Ashish Kumar Bhandari
Abstract: An accurate diagnosis of a brain tumour in its early stages is essential to improving cancer patients' chances of survival. Because of the structural complexity of the brain, diagnosing brain tumours in their initial stages with common manual approaches has become difficult and tedious for neurologists and radiologists. To improve diagnostic performance, computer-aided diagnosis (CAD) systems built on artificial intelligence concepts have been developed. This manuscript analyses various CAD-based approaches and designs a modern approach that applies transfer learning on top of deep learning to magnetic resonance imaging (MRI). A transfer learning approach with the YOLO (You Only Look Once) object detection model is applied, and the MRI dataset is analysed with several modified versions of YOLO. Based on this analysis, an object detection model is proposed that combines a modified YOLOv7 with a morphological filtering approach to reach an efficient and accurate diagnosis. Comparing the YOLOv7 variants shows that the proposed model with the YOLOv7-E6E object detection technique gives the best performance indicators: precision, recall, F1, and mAP@50 of 1, 0.92, 0.958333, and 0.974, respectively. Introducing the morphological filtering step before object detection improves mAP@50 to 0.992. Throughout the analysis, the BraTS 2021 dataset is used; it contains brain MR images from the RSNA-MICCAI brain tumour radiogenomic competition, and the complete dataset was labelled with the online tool MakeSense AI.
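The abstract does not specify which morphological filter precedes detection, so the sketch below shows a generic binary opening (erosion followed by dilation) in plain numpy, the kind of pre-processing that removes small speckle from a mask before an object detector runs. Treat it as an assumption-laden illustration, not the paper's pipeline.

```python
import numpy as np

def erode(img):
    """3x3 binary erosion with zero padding."""
    p = np.pad(img, 1)
    out = np.ones_like(img)
    for di in range(3):
        for dj in range(3):
            out &= p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def dilate(img):
    """3x3 binary dilation with zero padding."""
    p = np.pad(img, 1)
    out = np.zeros_like(img)
    for di in range(3):
        for dj in range(3):
            out |= p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def opening(img):
    """Erosion then dilation: removes speckle, keeps solid regions."""
    return dilate(erode(img))

mask = np.zeros((7, 7), dtype=np.uint8)
mask[2:5, 2:5] = 1      # a solid 3x3 blob (kept)
mask[0, 6] = 1          # an isolated noise pixel (removed)
print(opening(mask)[0, 6], opening(mask)[3, 3])  # 0 1
```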
Citations: 0
Enhancing CNN model classification performance through RGB angle rotation method
Neural Computing and Applications Pub Date: 2024-08-12 DOI: 10.1007/s00521-024-10232-z
Yahya Dogan, Cuneyt Ozdemir, Yılmaz Kaya
Abstract: In recent years, convolutional neural networks (CNNs) have significantly advanced computer vision by automatically extracting features from image data, using learnable filters to model complex and abstract image features without manual feature extraction. Combining feature maps obtained from CNNs with other approaches, however, can lead to more complex and interpretable inferences, enhancing model performance and generalizability. This study proposes a new method, RGB angle rotation, for effectively obtaining feature maps from RGB images: colour channels are rotated at different angles, and the angle information between channels is used to generate new feature maps. The effects of integrating models trained on these feature maps into an ensemble architecture are then investigated. On the CIFAR-10 dataset, using the proposed method in the ensemble model increases performance by 9.10% and 8.42% for the B and R channels, respectively, compared with the original model, while the effect of the G channel is very limited. On CIFAR-100, the method improves ensemble performance by 17.09% for the R channel and 5.06% for the B channel, with no significant improvement for the G channel. The method was also compared with traditional feature extraction methods such as the scale-invariant feature transform and local binary patterns, and achieved higher performance. In conclusion, the proposed RGB angle rotation method significantly improves model performance.
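The abstract leaves the exact "angle rotation" transform unspecified; one plausible reading, shown here purely as a hedged sketch, treats a pair of colour channels at each pixel as a 2-D vector and rotates it by a fixed angle, with the rotated channels serving as new feature maps. The channel pair and angle are assumptions, not the paper's definition.

```python
import numpy as np

def rotate_channel_pair(img, theta, ch=(0, 2)):
    """Rotate the chosen channel pair (default R and B) of an HxWx3 image
    by `theta` radians per pixel, yielding two new feature maps.
    NOTE: an illustrative guess at the 'angle rotation', not the paper's code."""
    a, b = img[..., ch[0]], img[..., ch[1]]
    c, s = np.cos(theta), np.sin(theta)
    return c * a - s * b, s * a + c * b

img = np.zeros((2, 2, 3))
img[..., 0] = 1.0                     # a pure-red image
new_r, new_b = rotate_channel_pair(img, np.pi / 2)
print(new_r[0, 0], new_b[0, 0])       # ≈ 0.0 1.0
```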
Citations: 0
Privacy-preserving hierarchical federated learning with biosignals to detect drowsiness while driving
Neural Computing and Applications Pub Date: 2024-08-12 DOI: 10.1007/s00521-024-10282-3
Sergio López Bernal, José Manuel Hidalgo Rogel, Enrique Tomás Martínez Beltrán, Mario Quiles Pérez, Gregorio Martínez Pérez, Alberto Huertas Celdrán
Abstract: In response to the global safety concern of drowsiness while driving, the European Union requires new vehicles to integrate detection systems compliant with the General Data Protection Regulation. To identify drowsiness patterns while preserving drivers' data privacy, recent literature has combined Federated Learning (FL) with biosignals such as facial expressions, heart rate, electroencephalography (EEG), and electrooculography (EOG). However, existing solutions are unsuitable when heterogeneous stakeholders want to collaborate at different levels while guaranteeing data privacy, and no prior work has evaluated the benefits of Hierarchical FL (HFL) with EEG and EOG biosignals or compared HFL against traditional FL and Machine Learning (ML) approaches for detecting drowsiness at the wheel while ensuring data confidentiality. This work therefore proposes a flexible framework for drowsiness identification using HFL, FL, and ML over EEG and EOG data. To validate the framework, a scenario is defined in which three transportation companies want to share their drivers' data without compromising confidentiality, arranged in a two-level hierarchical structure. Three incremental Use Cases (UCs) assess detection performance: UC1, intra-company FL, yields 77.3% accuracy while preserving the privacy of individual drivers' data; UC2, inter-company FL, achieves 71.7% accuracy for known drivers and 67.1% for new subjects, ensuring confidentiality between companies but not within each organization; and UC3, inter-company HFL, ensures comprehensive data privacy both within and between companies, with 71.9% accuracy for training subjects and 65.5% for new subjects.
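The two-level hierarchy in a setup like UC3 can be sketched as nested sample-count-weighted averaging (hierarchical FedAvg): client models are averaged within each company, and the company models are then averaged globally. The toy one-dimensional "models" and sample counts below are illustrative, not the paper's data.

```python
import numpy as np

def fedavg(weights, counts):
    """Sample-count-weighted average of model weight vectors."""
    return np.average(np.stack(weights), axis=0, weights=np.asarray(counts, float))

def hierarchical_fedavg(companies):
    """companies: list of (client_weights, client_counts) per company.
    Level 1 averages clients within each company; level 2 averages the
    company models, weighted by each company's total sample count."""
    company_models, company_counts = [], []
    for client_weights, client_counts in companies:
        company_models.append(fedavg(client_weights, client_counts))
        company_counts.append(sum(client_counts))
    return fedavg(company_models, company_counts)

# Two companies; each driver's local model is a 1-D toy weight vector.
c1 = ([np.array([1.0]), np.array([3.0])], [10, 10])   # company mean: 2.0
c2 = ([np.array([6.0])], [20])                        # company mean: 6.0
print(hierarchical_fedavg([c1, c2]))  # [4.]
```

Only model weights move up the hierarchy; raw biosignals stay on each driver's device, which is what keeps the scheme privacy-preserving.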
Citations: 0