{"title":"ATSIP 2020 Cover Page","authors":"","doi":"10.1109/atsip49331.2020.9231647","DOIUrl":"https://doi.org/10.1109/atsip49331.2020.9231647","url":null,"abstract":"","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123208833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Medication Code Recognition using Convolutional Neural Network","authors":"A. Zaafouri, M. Sayadi","doi":"10.1109/ATSIP49331.2020.9231801","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231801","url":null,"abstract":"In this paper, a new automatic method for expiration code of medical products using convolutional neural network (CNN) is presented. The input image is enhanced using unsharp masking method. Then the image is binarized using local adaptive thresholding technique (LATT) and thinned using morphological operator. Also, characters of the image are extracted using bounding box technique. Finally, a set of characters (A-Z) and digits (0-9) is boiled. The dataset of characters feed an adopted architecture of CNN in order to recognize expiration code of the medication. The proposed approach is tested on large datasets of characters under various conditions of complexities. The experimental results demonstrate the robustness of our approach. The developed system achieves approximately 93% accuracy on character recognition.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121253379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Face Emotion Recognition From Static Image Based on Convolution Neural Networks","authors":"M. Nasri, Mohamed Amine Hmani, Aymen Mtibaa, D. Petrovska-Delacrétaz, M. Slima, A. Hamida","doi":"10.1109/ATSIP49331.2020.9231537","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231537","url":null,"abstract":"Human-Machine Interaction systems have not yet reached all the emotional and social capacities. In this paper, we propose a face emotion recognition system from static image based on the Xception convolution neural network architecture and the K-fold-cross-validation strategy. The proposed system was improved using the fine-tuning method. The Xception model pre-trained on ImageNet database for objects recognition was fine-tuned to recognize seven emotional states. The proposed system is evaluated on the database recorded during the Empathic project and the AffectNet database. Our experimental results achieve an accuracy of 62%, 69% on Empathic and AffectNet databases respectively using the fine-tuning strategy. Combined the AffectNet and Empathic databases to train our proposed model, show significant improvement in the emotion recognition that achieves an accuracy of 91.2% on Empathic database.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125929969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Classification of DNA Microarrays Using Deep Learning to identify Cell Cycle Regulated Genes","authors":"Hiba Lahmer, A. Oueslati, Z. Lachiri","doi":"10.1109/ATSIP49331.2020.9231888","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231888","url":null,"abstract":"The aim of this work is to take advantage of the power and growth of machine learning methods and deep learning algorithms in the biomedical field, and how to use it to predict and recognize repetitive patterns. The ultimate goal is to analyze the large amount set of data produced from the DNA (Deoxyribonucleic acid) microarrays technology. We can use this data to extract facts, information, and skills, such as gene expression level. Our target here is to classify two genes’ types. The first represents cell cycle regulated genes and the second represents the non-cell cycle ones. For the classification purpose, we preprocess the data, and we implement deep learning models. Then we evaluate our approach and compare its precision with Liu and al results. In the literature, the latest approaches are depending on processing the numerical data related to the DNA microarrays genes progression. In our work, we adopt a novel approach using directly the Microarrays image data. We use the Convolutional Neural Network and the fully connected neural network algorithms, to classify our processed image data. The experiments demonstrate that our approach outperforms the state of art by a margin of 20 per cent. Our model accomplishes real time test accuracy of ~ 92.39 % at classifying.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131648160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"e-ASPECTS for early detection and diagnosis of ischemic stroke","authors":"Haifa Touati, Boughariou Jihene, Sellemi Lamia, Ben Hamida Ahmed","doi":"10.1109/ATSIP49331.2020.9231788","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231788","url":null,"abstract":"Ischemic Stroke is one of the major causes of human death and disability. So early stroke diagnosis is vital for patient’s survival and is remained as challenge for neurophysicians. The electronic Alberta Stroke Program Early CT Score (e-ASPECTS) software is widely used by neurophysician to assess the extent of early ischemic changes in brain imaging for acute stroke treatment. However, despite its efficiency, e-ASPECTS suffer of some limitations because of non-contrast computed-tomography scans. This study aims to present the e-ASPECTS program limits of through a literature review of recent studies that compare between automatic performance and human performance at the level of stroke detection and assessment. The present paper highlights the recently developed Artificial-Intelligent based tools in diagnosis and treatment of stroke with a comparison with humain score.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129166014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Blurred Image Detection In Drone Embedded System","authors":"Ratiba Gueraichi, A. Serir","doi":"10.1109/ATSIP49331.2020.9231665","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231665","url":null,"abstract":"This paper deals with the detection of blurred images that may eventually be captured by a drone. The embedded system should be able to measure the amount of blur affecting the images in order to decide whether to acquire the scene again or not. For this purpose, we have developed a simple model based on Discrete Cosine Transform (DCT) associated to Support Vector Machine Classifier SVM, to classify images into three categories and thus detect strongly, moderately and slightly blurred images. The proposed system has been tested on 550 images captured by a drone. The obtained results are very conclusive.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126015343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Fast pore matching method based on core point alignment and orientation","authors":"Houda Khmila, I. Kallel, Sami Barhoumi, N. Smaoui, H. Derbel","doi":"10.1109/ATSIP49331.2020.9231829","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231829","url":null,"abstract":"Nowadays, high-resolution fingerprint images are more and more used in the fingerprint recognition systems thanks to the recognition accuracy that they provide. Indeed, they offer more sufficient details such as sweat pores, ridges, contours, and other details. Pores have been adopted to be one of the brilliant nominees in improving the efficiency of automated fingerprint identification systems to maintain a high level of security. However, the geometric transformations, that occur during the acquisition phase, can cause several defects on the result of the matching process, hence they decline the accuracy of the recognition. To overcome this problem, alignment is often needed. This image pretreatment is classically based on complex geometric operations that are time-consuming. Otherwise, for pore matching, the majority of approaches are based only on pore coordinates. In this paper, we propose a novel pore matching method based, firstly, on only one of the singular points, namely the core points for the alignment phase, and also the valuable features used for the score calculation namely position and the orientation of pores. We assess our proposed approach using the PolyU-HRF database and we compare it to some well-known approaches of level 3 fingerprint recognition. The experimental results demonstrate that the proposed method can achieve significant performance recognition accuracy across various qualities of fingerprint images.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125154389","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Low Cost and Low Power Stacked Sparse Autoencoder Hardware Acceleration for Deep Learning Edge Computing Applications","authors":"T. Belabed, M. G. Coutinho, Marcelo A. C. Fernandes, C. Valderrama, C. Souani","doi":"10.1109/ATSIP49331.2020.9231748","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231748","url":null,"abstract":"Nowadays, Deep Learning DL becoming more and more interesting in many areas, such as genomics, security, data analysis, image, and video processing. However, DL requires more and more powerful and parallel computing. The calculation performed by super-machines equipped with powerful processors, such as the latest GPUs. Despite their power, these computing units consume a lot of energy, which makes their use very difficult in small embedded systems and edge computing. To overcome the problem for which we must keep the maximum performance and satisfy the power constraint, it is necessary to use a heterogeneous strategy. Some solutions are promising when using less energyconsuming electronic circuits, such as FPGAs associated with less expensive topologies such as Stacked Sparse Autoencoders. Our target architecture is the Xilinx ZYNQ 7020 SoC, which combines a dual-core ARM processor and an FPGA in the same chip. In the interest of flexibility, we decided to leverage the performance of Xilinx’s high-level synthesis tools, evaluate and choose the best solution in terms of size and performance of the data exchange, synchronization and pipeline processing. The results show that our implementation gives high performance at very low energy consumption. Indeed, the evaluation of our accelerator shows that it can classify 1160 MNIST images per second, consuming only 0.443 W; 2.4 W for the entire system. More than the low energy consumption and the high performance, the platform used only costs $ 125.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134421327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Binary hierarchical multiclass classifier for uncertain numerical features","authors":"Marwa Chakroun, Amal Charfi, Sonda Ammar Bouhamed, I. Kallel, B. Solaiman, H. Derbel","doi":"10.1109/ATSIP49331.2020.9231804","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231804","url":null,"abstract":"Real-world multiclass classification problems involve moderately high dimensional inputs with a large number of class labels. As well, for most real-world applications, uncertainty has to be handled carefully, unless the classification results could be inaccurate or even incorrect. In this paper, we investigate a binary hierarchical partitioning of the output space in an uncertain framework to overcome these limitations and yield better solutions. Uncertainty is modeled within the quantitative possibility theory framework. Experimentations on real ultrasonic dataset show good performances of the proposed multiclass classifier. An accuracy rate of 93% has been achieved.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132860738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Identification of the user by using a hardware device","authors":"H. Hamam","doi":"10.1109/ATSIP49331.2020.9231602","DOIUrl":"https://doi.org/10.1109/ATSIP49331.2020.9231602","url":null,"abstract":"In real life, people treat with their interlocutors face-to-face. In virtual life, our interlocutors are behind the walls of internet, and we do not whether they are human beings or programs. Thus, an issue of identity rises. Special attention is given to on-line banking since it is a delicate issue. We propose a hybrid software/hardware solution to overcome this problem of identity identification. The bank provides the client with a hardware device including a set of passwords. Each password is valid for only one on-line transaction. If a password is intercepted by an unauthorized person then it is useless. The password is entered by a device with a USB connector after a validation of the identity through fingerprints or other biometric measures. The concept has been validated by designing a USB card including a fingerprint reader.","PeriodicalId":384018,"journal":{"name":"2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114448644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}