A Healthcare System for detecting Stress from ECG signals and improving the human emotional

Madhavikatamaneni, Riya K S, Anvar Shathik J, K. PoornaPushkala
{"title":"A Healthcare System for detecting Stress from ECG signals and improving the human emotional","authors":"Madhavikatamaneni, Riya K S, Anvar Shathik J, K. PoornaPushkala","doi":"10.1109/ICACTA54488.2022.9753564","DOIUrl":null,"url":null,"abstract":"A strategy for communicating with another person that, if done correctly, maybe easily be understood or accepted by the other person. There are a variety of alternative modes of communication available, including visual representation, body language, conversation, written language, among others. Currently, speech recognition is evolving as a powerful technology in today's world, with applications in a wide range of areas requiring specialised hardware. Voice has a wide range of applications and is frequently regarded as the most powerful mode of communication among all other technologies. The attitude, health status, emotion, gender, and speaker's identity are all considered part of the rich dimension, also known as the rich dimension of communication. Gender and emotion are the significant components of this framework for voice recognition, and they are taken into consideration for a number of applications in this framework for voice recognition. We want to demonstrate an emotion detection system that uses a speech signal as its main input to identify various emotions with this framework. We offer a unique approach for emotion recognition from speech input that uses Artificial Neural Networks (ANN) and is implemented on a Field Programmable Gate Array device (FPGA). In this scenario, the back propagation technique underneath the ANN is utilised as a classifier in the emotion identification system. The emotions are categorised based on their intensity using this approach. Speech pre-processing, feature extraction, and classification are the proposed work's major processing stages. 
Here, during the features extraction process, characteristics from the data are recovered, such as Cepstrum, Pitch, Mel-frequency cepstral coefficients (MFCC), and the Discrete Wavelet Transform (DWT). In addition, the method of back propagation neural networks is used to achieve the classification task—the proposed work outcomes with the 91.235% accuracy with the less error rate.","PeriodicalId":345370,"journal":{"name":"2022 International Conference on Advanced Computing Technologies and Applications (ICACTA)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-03-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 International Conference on Advanced Computing Technologies and Applications (ICACTA)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICACTA54488.2022.9753564","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

Communication is a means of conveying information to another person in a form that, when done correctly, can be readily understood and accepted. Many modes of communication are available, including visual representation, body language, conversation, and written language. Speech recognition is currently evolving into a powerful technology, with applications across a wide range of areas, some requiring specialised hardware. Voice has a wide range of applications and is frequently regarded as the most powerful mode of communication. A speech signal carries a rich dimension of information: the speaker's attitude, health status, emotion, gender, and identity. Gender and emotion are significant components of this voice-recognition framework and are taken into consideration across a number of its applications. We demonstrate an emotion detection system that takes a speech signal as its main input to identify various emotions. We offer an approach for emotion recognition from speech that uses an Artificial Neural Network (ANN) implemented on a Field Programmable Gate Array (FPGA) device. The back-propagation technique underlying the ANN serves as the classifier in the emotion identification system, and emotions are categorised by their intensity. Speech pre-processing, feature extraction, and classification are the proposed work's major processing stages. During feature extraction, characteristics such as the cepstrum, pitch, Mel-frequency cepstral coefficients (MFCC), and Discrete Wavelet Transform (DWT) coefficients are recovered from the data. A back-propagation neural network then performs the classification; the proposed work achieves 91.235% accuracy with a low error rate.
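The classification stage described above can be sketched as a small back-propagation network. The following is a minimal illustrative sketch, not the paper's FPGA implementation: the feature vectors, network sizes, and three intensity classes are all hypothetical stand-ins (real inputs would be MFCC/DWT features extracted from speech).

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Synthetic stand-in for extracted speech features: 150 samples of
# 40-dim vectors (e.g. MFCC + DWT coefficients), 3 intensity classes.
X = rng.normal(size=(150, 40))
labels = rng.integers(0, 3, size=150)
X += labels[:, None] * 2.0          # shift class means so they are separable
Y = np.eye(3)[labels]               # one-hot targets

# One hidden layer; weights trained by plain gradient descent.
W1 = rng.normal(scale=0.1, size=(40, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, 3));  b2 = np.zeros(3)
lr = 0.5
N = len(X)

def forward(X):
    H = sigmoid(X @ W1 + b1)        # hidden activations
    O = sigmoid(H @ W2 + b2)        # output activations
    return H, O

losses = []
for _ in range(300):
    H, O = forward(X)
    losses.append(float(np.mean((O - Y) ** 2)))
    # Back-propagate the mean-squared error through both layers.
    dO = (O - Y) * O * (1 - O)      # delta at the output layer
    dH = (dO @ W2.T) * H * (1 - H)  # delta at the hidden layer
    W2 -= lr * (H.T @ dO) / N; b2 -= lr * dO.mean(axis=0)
    W1 -= lr * (X.T @ dH) / N; b1 -= lr * dH.mean(axis=0)

_, O = forward(X)
acc = float((O.argmax(axis=1) == labels).mean())
```

On this toy data the training loss falls steadily; the same delta-rule updates are what a fixed-point FPGA realisation would compute in hardware.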