{"title":"Investigation of bio-mechanical indicators to explore the performance of mental preparation","authors":"S. B. Jebara, G. Marco","doi":"10.1109/ATSIP49331.2020.9231904","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231904","url":null,"abstract":"This research was carried out to understand whether mental preparation (motor imagery) influences (increases) force level control, by comparing the achieved force level with the one required by the task. To do so, thirty-six young healthy subjects performed isometric forearm contractions at 3 force levels (low, medium, high) of their maximal voluntary contraction. Half of the subjects mentally prepared their motion (motor imagery) before performing the exercise, and the other half performed the exercise without motor imagery (control group). The work is composed of three parts. The first was devoted to force signal processing: baseline rectification, denoising, and feature extraction. The second part was devoted to unsupervised classification, which aims at reorganizing the realized force levels into three well-separated classes. The third part aimed to provide results on force level self-control and regulation. We mainly show that mental preparation increases compliance with the required force level.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"612 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116397417","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automated EEG Artifact Detection Using Independent Component Analysis","authors":"Amira Echtioui, W. Zouch, M. Ghorbel, M. Slima, A. Hamida, C. Mhiri","doi":"10.1109/ATSIP49331.2020.9231574","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231574","url":null,"abstract":"In electroencephalogram (EEG) recordings, physiological and non-physiological artifacts pose many problems. Independent Component Analysis (ICA) is a widely used algorithm for removing different artifacts from EEG signals. It separates the data into linearly independent components (ICs). However, the evaluation and classification of the computed ICs as EEG or artifact is not currently automated, which requires manual intervention to reject ICs with visually detected artifacts after decomposition. In this paper, we propose a new automated approach for artifact detection using the ICA algorithm. The best mean square error was achieved using the SOBI-ICA (Second Order Blind Identification) and ADJUST algorithms. Compared with existing automated solutions, our approach is not limited to particular electrode configurations, numbers of EEG channels, or specific types of artifacts. It provides a practical, reliable, automatic, and real-time-capable tool that avoids the time-consuming manual selection of ICs during artifact rejection.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130560713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Detecting Abnormal ventricular Contractility from Radionuclide Ventriculography Images","authors":"Halima Dziri, M. A. Cherni, D. Sellem","doi":"10.1109/ATSIP49331.2020.9231823","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231823","url":null,"abstract":"The evaluation of ventricular contractility is an important factor that helps doctors choose the appropriate therapy for abnormal heart function occurring at the level of contractility, such as hypokinesia, dyskinesia, and akinesia. The purpose of this study is to apply covariance analysis to track ventricular contractility. We evaluated the proposed method on radionuclide ventriculography images from 44 patients with abnormal contractility and 6 with normal contractility. The experimental results of this study show that the proposed method has good reproducibility in the assessment of ventricular function (accuracy = 0.94, sensitivity = 0.94, specificity = 1, and AUC = 0.97).","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"105 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123406499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic segmentation of medical images using convolutional neural networks","authors":"Sourour Mesbahi, Hedi Yazid","doi":"10.1109/ATSIP49331.2020.9231669","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231669","url":null,"abstract":"This paper presents a neural network architecture for the segmentation of medical images. We test and implement various Convolutional Neural Network (CNN) architectures, applied to the segmentation of cerebral images containing brain tumors. The main objective is to choose the best architecture and parameterization for an MRI brain tumor segmentation task while dealing with a small database. Segmentation and learning assessment tests show good performance using our customized CNN architecture.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124672055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Laughter synthesis: A comparison between Variational autoencoder and Autoencoder","authors":"Nadia Mansouri, Z. Lachiri","doi":"10.1109/ATSIP49331.2020.9231607","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231607","url":null,"abstract":"Laughter is one of the most familiar non-verbal sounds that humans produce from birth; it conveys messages about our emotional state. These characteristics make it an important sound to study in order to improve human-machine interactions. In this paper we investigate the audio laughter generation process from its acoustic features. The proposed process is treated as an analysis-transformation-synthesis benchmark based on unsupervised dimensionality reduction techniques: the standard autoencoder (AE) and the variational autoencoder (VAE). The laughter synthesis methodology thus consists of transforming the extracted high-dimensional log-magnitude spectrogram into a low-dimensional latent vector. This latent vector contains the most valuable information, used to reconstruct a synthetic magnitude spectrogram that is passed through a specific vocoder to generate the laughter waveform. We also exploit the VAE to create a new sound (speech-laugh) based on an interpolation process. To evaluate the performance of these models, both objective and subjective evaluations were conducted.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126675585","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Embedded Landmark implementation for Deep Learning pre-processing","authors":"Hedi Choura, T. Frikha, M. Baklouti, Faten Chaabane","doi":"10.1109/ATSIP49331.2020.9231803","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231803","url":null,"abstract":"Due to the evolution of information technology, it is becoming increasingly easy to use new platforms to set up efficient systems that are well adapted to the expected needs. As part of improving security and facilitating the detection of potentially dangerous persons, an intelligent application for on-board facial recognition is being developed; this paper is proposed within that framework. The objective of the proposed work is twofold. On the one hand, we develop a module for the detection of relevant facial characteristics, which is the first step of an intelligent video surveillance application. Based on the detection of points of interest by the Landmark algorithm, a software optimization of the work is proposed. On the other hand, this application is decomposed so that it can be embedded on a multiprocessor architecture. To validate the multiprocessor-based approach, a comparison with other existing powerful processor architectures is used to select the best approach. This work will serve as the input for an intelligent embedded face detection application based on Machine Learning.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"36 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128177365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Automatic Recognition of Epileptiform EEG Abnormalities Using Machine Learning Approaches","authors":"Itaf Ben Slimen, H. Seddik","doi":"10.1109/ATSIP49331.2020.9231743","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231743","url":null,"abstract":"Epilepsy is a neurological disorder affecting about 1% of the world population. It is characterized by the anomalous activity of a large number of neurons. In this paper, we propose an automated system for seizure detection and diagnosis using EEG signal records. Seizure periods are generally characterized by epileptiform discharges with various changes, including variations in spike rate, shape, and amplitude. These epileptiform discharges are used as an indicator to predict the EEG signal class using machine learning methods. Based on these EEG characteristics, the proposed approach achieves a classification rate of 99.8% on the Bonn database.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114237615","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Evaluation of the H2020 SpeechXRays project Cancelable Face System Under the Framework of ISO/IEC 24745:2011","authors":"Mohamed Amine Hmani, Aymen Mtibaa, D. Petrovska-Delacrétaz, Claude Bauzou, Iacob Crucianu","doi":"10.1109/ATSIP49331.2020.9231763","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231763","url":null,"abstract":"Thanks to recent advances in deep learning and the availability of big datasets, biometric systems achieve high performance. However, these systems suffer from two main shortcomings: non-revocability and vulnerability to biometric spoofing. Due to the GDPR, it has become increasingly important to have tools and methods to protect users' privacy. The H2020 SpeechXRays project aims to meet this privacy requirement by implementing a cancelable biometric system. Using a shuffling transformation on the binary embeddings extracted from face images, combined with a shuffling key, the users' templates are made cancelable and unlinkable at the same time. We explain how the system follows the ISO/IEC 24745:2011 compliance recommendations, and we report its performance and evaluate its properties following the ISO standardized metrics, notably irreversibility and unlinkability. When working under ideal circumstances (the second factor is not stolen), the system gives 100% accuracy on the MOBIO dataset. Moreover, it is fully unlinkable, and it is computationally infeasible to recover the original template without the second factor.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115992422","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Adding dimensional features for emotion recognition on speech","authors":"Leila Ben Letaifa, M. I. Torres, R. Justo","doi":"10.1109/ATSIP49331.2020.9231766","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231766","url":null,"abstract":"Developing accurate emotion recognition systems requires extracting suitable features of these emotions. In this paper, we propose an original approach to parameter extraction based on the strong theoretical and empirical correlation between emotion categories and dimensional emotion parameters. More precisely, acoustic features and dimensional emotion parameters are combined for better speech emotion characterisation. The procedure consists in developing arousal and valence models by regression on the training data and then estimating their values in the test data by classification. Hence, when classifying an unknown sample into emotion categories, these estimates can be integrated into the feature vectors. The results using this new set of parameters show a significant improvement in speech emotion recognition performance.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123868859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Melanoma skin cancer detection using deep learning and classical machine learning techniques: A hybrid approach","authors":"Jinen Daghrir, Lotfi Tlig, M. Bouchouicha, M. Sayadi","doi":"10.1109/ATSIP49331.2020.9231544","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231544","url":null,"abstract":"Melanoma is considered one of the most fatal cancers in the world; this form of skin cancer may spread to other parts of the body if it is not diagnosed at an early stage. The medical field has thus seen a great evolution with the use of automated diagnosis systems that can help doctors, and even laypeople, identify a certain kind of disease. In this context, we introduce a hybrid method for melanoma skin cancer detection that can be used to examine any suspicious lesion. Our proposed system relies on the predictions of three different methods: a convolutional neural network and two classical machine learning classifiers trained with a set of features describing the borders, texture, and color of a skin lesion. These methods are then combined using majority voting to improve their performance. The experiments have shown that using the three methods together gives the highest accuracy.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124771910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}