2020 5th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP): Latest Publications

e-ASPECTS for early detection and diagnosis of ischemic stroke
Haifa Touati, Boughariou Jihene, Sellemi Lamia, Ben Hamida Ahmed
DOI: 10.1109/ATSIP49331.2020.9231788
Abstract: Ischemic stroke is one of the major causes of death and disability, so early stroke diagnosis is vital for patient survival and remains a challenge for neurophysicians. The electronic Alberta Stroke Program Early CT Score (e-ASPECTS) software is widely used by neurophysicians to assess the extent of early ischemic changes in brain imaging for acute stroke treatment. Despite its efficiency, however, e-ASPECTS suffers from some limitations due to non-contrast computed tomography scans. This study presents the limits of the e-ASPECTS program through a literature review of recent studies that compare automatic and human performance in stroke detection and assessment. The paper highlights recently developed artificial-intelligence-based tools for the diagnosis and treatment of stroke and compares them with human scores.
Citations: 1
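For context on the score the paper evaluates: ASPECTS is a 10-point topographic scale in which one point is deducted for each of ten middle-cerebral-artery territory regions showing early ischemic change. The sketch below illustrates that scoring rule only; it is not the e-ASPECTS product's implementation, and the input set of affected regions is hypothetical.

```python
# Minimal sketch of ASPECTS scoring: start from 10 and subtract one point
# for every region that shows early ischemic change. Illustration of the
# scoring rule only, not the e-ASPECTS software's implementation.

ASPECTS_REGIONS = [
    "caudate", "lentiform", "insula", "internal_capsule",
    "M1", "M2", "M3", "M4", "M5", "M6",
]

def aspects_score(affected_regions):
    """Return the ASPECTS value (10 = normal, 0 = all regions affected)."""
    affected = set(affected_regions) & set(ASPECTS_REGIONS)
    return 10 - len(affected)

# Example: early ischemic change detected in the insula and the M2 region.
print(aspects_score({"insula", "M2"}))  # -> 8
```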
Subjective evaluation of approximate Discrete Sine Transform for the Versatile Video Coding standard
S. B. Jdidia, M. Amor, Fatma Belghith, N. Masmoudi
DOI: 10.1109/ATSIP49331.2020.9231720
Abstract: For the past few years, the Joint Video Exploration Team (JVET) has been working to establish a new compression standard known as Versatile Video Coding (VVC), and several coding tools have been introduced. The Adaptive Multiple Transform (AMT), used for residual transform coding, involves five discrete transforms from the cosine (DCT) and sine (DST) families. However, computing these transforms by direct matrix multiplication requires considerable hardware resources and yields high run-time complexity. In particular the DST-VII, frequently selected by the transform module, relies on complex matrix multiplication, which is a significant issue for hardware implementation. DST-VII approximation was therefore addressed as an efficient way to reduce the operation count and the execution time while preserving coding performance close to that of the exact transform. This paper provides an objective and a perceptual assessment of the proposed transform using the Peak Signal-to-Noise Ratio (PSNR) and the Global Score Reduced-reference Video quality assessment based on the Human Visual System ($GS_{RVH}$) metrics, respectively.
Citations: 2
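PSNR, the objective metric cited above, is defined as $10\log_{10}(\mathrm{MAX}^2/\mathrm{MSE})$ with MAX = 255 for 8-bit video. The NumPy sketch below shows that computation for two same-sized frames; it is a generic illustration, not the paper's evaluation pipeline, and the random frames are placeholders.

```python
import numpy as np

def psnr(reference, reconstructed, peak=255.0):
    """Peak Signal-to-Noise Ratio between two 8-bit frames, in dB."""
    ref = reference.astype(np.float64)
    rec = reconstructed.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Example with random data standing in for an original and a decoded frame.
rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
noisy = np.clip(frame + rng.normal(0, 3, frame.shape), 0, 255).astype(np.uint8)
print(round(psnr(frame, noisy), 2))
```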
Face Emotion Recognition From Static Image Based on Convolution Neural Networks
M. Nasri, Mohamed Amine Hmani, Aymen Mtibaa, D. Petrovska-Delacrétaz, M. Slima, A. Hamida
DOI: 10.1109/ATSIP49331.2020.9231537
Abstract: Human-machine interaction systems have not yet reached full emotional and social capacities. In this paper, we propose a face emotion recognition system for static images based on the Xception convolutional neural network architecture and a K-fold cross-validation strategy. The proposed system was improved with fine-tuning: the Xception model, pre-trained on the ImageNet database for object recognition, was fine-tuned to recognize seven emotional states. The system is evaluated on the database recorded during the Empathic project and on the AffectNet database. Our experiments achieve accuracies of 62% and 69% on the Empathic and AffectNet databases, respectively, using the fine-tuning strategy. Combining the AffectNet and Empathic databases to train the proposed model yields a significant improvement in emotion recognition, reaching an accuracy of 91.2% on the Empathic database.
Citations: 5
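As a rough illustration of the fine-tuning setup described above, the Keras sketch below loads Xception pre-trained on ImageNet, removes its classification head, and attaches a seven-class softmax. It is a generic sketch under assumed hyperparameters (input size, dropout, learning rate), not the authors' exact configuration, and the training datasets are placeholders.

```python
import tensorflow as tf

NUM_EMOTIONS = 7  # seven emotional states, as in the paper

# Xception pre-trained on ImageNet, without its original classification head.
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3), pooling="avg"
)
base.trainable = True  # fine-tune the convolutional backbone

model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(NUM_EMOTIONS, activation="softmax"),
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),  # small learning rate for fine-tuning
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets are placeholders
```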
A Clinical support system for Prediction of Heart Disease using Machine Learning Techniques
Halima El Hamdaoui, S. Boujraf, N. Chaoui, M. Maaroufi
DOI: 10.1109/ATSIP49331.2020.9231760
Abstract: Heart disease is a leading cause of death worldwide, yet it remains difficult for clinicians to predict, as doing so is a complex and costly task. We therefore propose a clinical support system for predicting heart disease to help clinicians diagnose and make better decisions. Machine learning algorithms such as Naïve Bayes, K-Nearest Neighbors, Support Vector Machine, Random Forest, and Decision Tree are applied in this study to predict heart disease from risk-factor data retrieved from medical files. Several experiments were conducted on the UCI data set, and the outcome reveals that Naïve Bayes performs best under both cross-validation and train-test split evaluation, with accuracies of 82.17% and 84.28%, respectively. A second finding is that the accuracy of all algorithms decreases after applying the cross-validation technique. Finally, we suggest multiple validation techniques on prospectively collected data toward the approval of the proposed approach.
Citations: 14
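The sketch below shows the general shape of the comparison reported above for the Naïve Bayes case: one hold-out evaluation with a train-test split and one cross-validated evaluation. It uses scikit-learn's GaussianNB as a stand-in; the CSV path and the "target" column name are assumptions, not the authors' data layout.

```python
import pandas as pd
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split, cross_val_score

# Placeholder path/column: UCI heart-disease-style data with a binary "target" label.
df = pd.read_csv("heart.csv")
X, y = df.drop(columns=["target"]), df["target"]

# Hold-out (train-test split) evaluation.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
nb = GaussianNB().fit(X_tr, y_tr)
print("hold-out accuracy:", nb.score(X_te, y_te))

# 10-fold cross-validation evaluation.
cv_scores = cross_val_score(GaussianNB(), X, y, cv=10)
print("cross-validation accuracy:", cv_scores.mean())
```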
Classification of DNA Microarrays Using Deep Learning to identify Cell Cycle Regulated Genes
Hiba Lahmer, A. Oueslati, Z. Lachiri
DOI: 10.1109/ATSIP49331.2020.9231888
Abstract: The aim of this work is to take advantage of the power and growth of machine learning and deep learning in the biomedical field, and to use them to predict and recognize repetitive patterns. The ultimate goal is to analyze the large amount of data produced by DNA (deoxyribonucleic acid) microarray technology, from which facts, information, and knowledge such as gene expression levels can be extracted. Our target is to classify two gene types: cell cycle regulated genes and non-cell-cycle regulated genes. For classification, we preprocess the data and implement deep learning models, then evaluate our approach and compare its precision with the results of Liu et al. The latest approaches in the literature process the numerical data describing gene progression in DNA microarrays; in this work, we adopt a novel approach that uses the microarray image data directly. We use a convolutional neural network and a fully connected neural network to classify the processed image data. The experiments demonstrate that our approach outperforms the state of the art by a margin of 20 percent, and our model achieves a real-time test accuracy of about 92.39%.
Citations: 1
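To make the two-class, image-based setup concrete, the sketch below defines a small convolutional network followed by fully connected layers for binary classification of microarray image patches. The patch size, layer widths, and data pipeline are assumptions for illustration; they are not the architecture reported in the paper.

```python
import tensorflow as tf

# Minimal CNN + fully connected classifier for microarray image patches
# (cell-cycle regulated vs. not). Input size and data pipeline are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),   # fully connected stage
    tf.keras.layers.Dense(1, activation="sigmoid"),  # two-class output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_split=0.1, epochs=20)
```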
Blurred Image Detection In Drone Embedded System
Ratiba Gueraichi, A. Serir
DOI: 10.1109/ATSIP49331.2020.9231665
Abstract: This paper deals with the detection of blurred images that may be captured by a drone. The embedded system should be able to measure the amount of blur affecting an image in order to decide whether the scene needs to be acquired again. For this purpose, we developed a simple model based on the Discrete Cosine Transform (DCT) combined with a Support Vector Machine (SVM) classifier to classify images into three categories, detecting strongly, moderately, and slightly blurred images. The proposed system was tested on 550 images captured by a drone, and the results obtained are very conclusive.
Citations: 3
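The sketch below illustrates the general idea of pairing a DCT-derived blur feature with a three-class SVM: blurred images concentrate DCT energy in low frequencies, so the fraction of energy outside a low-frequency block drops as blur increases. The specific feature, the toy random data, and the labels are assumptions for illustration; they are not the paper's feature set.

```python
import numpy as np
from scipy.fft import dctn
from sklearn.svm import SVC

def dct_blur_feature(gray_image, k=8):
    """Fraction of DCT energy outside the top-left k x k low-frequency block.
    Blurred images concentrate energy in low frequencies, so this ratio drops."""
    coeffs = dctn(gray_image.astype(np.float64), norm="ortho")
    total = np.sum(coeffs ** 2) + 1e-12
    low = np.sum(coeffs[:k, :k] ** 2)
    return np.array([(total - low) / total])

# Hypothetical training data: features for images labelled 0/1/2
# (strongly, moderately, slightly blurred).
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(30)]
labels = rng.integers(0, 3, size=30)

X = np.vstack([dct_blur_feature(img) for img in images])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(dct_blur_feature(images[0]).reshape(1, -1)))
```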
Fast pore matching method based on core point alignment and orientation
Houda Khmila, I. Kallel, Sami Barhoumi, N. Smaoui, H. Derbel
DOI: 10.1109/ATSIP49331.2020.9231829
Abstract: Nowadays, high-resolution fingerprint images are increasingly used in fingerprint recognition systems thanks to the recognition accuracy they provide: they offer richer detail such as sweat pores, ridges, and contours. Pores have become one of the most promising features for improving the efficiency of automated fingerprint identification systems and maintaining a high level of security. However, the geometric transformations that occur during the acquisition phase can introduce several defects into the matching process and thus reduce recognition accuracy. To overcome this problem, alignment is often needed, and this image pretreatment is classically based on complex, time-consuming geometric operations. Moreover, most pore matching approaches rely only on pore coordinates. In this paper, we propose a novel pore matching method based, first, on a single singular point, namely the core point, for the alignment phase, and on the position and orientation of pores as the features used for score calculation. We assess the proposed approach on the PolyU-HRF database and compare it to several well-known level-3 fingerprint recognition approaches. The experimental results demonstrate that the proposed method achieves significant recognition accuracy across various qualities of fingerprint images.
Citations: 1
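To convey the core-point idea above in code, the sketch below translates each pore list so that its core point becomes the origin, then scores two prints by how many pores agree in both position and orientation. The tolerances, the greedy matching rule, and the toy pore lists are assumptions for illustration; the paper's actual alignment and scoring are not reproduced here.

```python
import numpy as np

def align_to_core(pores, core_xy):
    """Translate pore coordinates so the core point becomes the origin.
    pores: array of (x, y, orientation_radians) rows."""
    aligned = pores.copy()
    aligned[:, :2] -= np.asarray(core_xy, dtype=float)
    return aligned

def match_score(pores_a, pores_b, dist_tol=10.0, angle_tol=np.pi / 8):
    """Fraction of pores in A with a counterpart in B within position and
    orientation tolerances (a simple greedy illustration)."""
    matched = 0
    for x, y, theta in pores_a:
        d = np.hypot(pores_b[:, 0] - x, pores_b[:, 1] - y)
        dtheta = np.abs(np.angle(np.exp(1j * (pores_b[:, 2] - theta))))
        if np.any((d < dist_tol) & (dtheta < angle_tol)):
            matched += 1
    return matched / len(pores_a)

# Toy example with hypothetical pore lists and core points.
a = align_to_core(np.array([[120.0, 80.0, 0.3], [130.0, 95.0, 1.1]]), (100, 70))
b = align_to_core(np.array([[122.0, 82.0, 0.35], [131.0, 96.0, 1.0]]), (102, 72))
print(match_score(a, b))
```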
Low Cost and Low Power Stacked Sparse Autoencoder Hardware Acceleration for Deep Learning Edge Computing Applications
T. Belabed, M. G. Coutinho, Marcelo A. C. Fernandes, C. Valderrama, C. Souani
DOI: 10.1109/ATSIP49331.2020.9231748
Abstract: Nowadays, deep learning (DL) is becoming more and more attractive in many areas, such as genomics, security, data analysis, and image and video processing. However, DL requires increasingly powerful parallel computing, with calculations performed by machines equipped with powerful processors such as the latest GPUs. Despite their power, these computing units consume a lot of energy, which makes them very difficult to use in small embedded systems and edge computing. To keep maximum performance while satisfying the power constraint, a heterogeneous strategy is necessary. Promising solutions use less energy-consuming electronic circuits, such as FPGAs, associated with less expensive topologies such as stacked sparse autoencoders. Our target architecture is the Xilinx ZYNQ 7020 SoC, which combines a dual-core ARM processor and an FPGA on the same chip. For flexibility, we leveraged the performance of Xilinx's high-level synthesis tools to evaluate and choose the best solution in terms of size and performance for data exchange, synchronization, and pipeline processing. The results show that our implementation delivers high performance at very low energy consumption: the accelerator classifies 1160 MNIST images per second while consuming only 0.443 W (2.4 W for the entire system). Beyond the low energy consumption and high performance, the platform used costs only $125.
Citations: 3
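For readers unfamiliar with the topology being accelerated, the sketch below is a software-level stacked sparse autoencoder classifier for MNIST in Keras, where sparsity is encouraged with an L1 activity penalty on the encoder layers. It is an assumption-laden illustration: the layer sizes are arbitrary, greedy layer-wise pretraining is omitted, and nothing here reflects the paper's FPGA/HLS implementation.

```python
import tensorflow as tf

# Software-level sketch of a stacked sparse autoencoder classifier on MNIST.
# Sparsity is encouraged with an L1 activity penalty on the encoder layers;
# layer sizes are illustrative, not those of the hardware design.
sparse = tf.keras.regularizers.l1(1e-5)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(256, activation="sigmoid", activity_regularizer=sparse),
    tf.keras.layers.Dense(64, activation="sigmoid", activity_regularizer=sparse),
    tf.keras.layers.Dense(10, activation="softmax"),  # MNIST digit classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
x_tr = x_tr.reshape(-1, 784) / 255.0
x_te = x_te.reshape(-1, 784) / 255.0
# model.fit(x_tr, y_tr, epochs=5, validation_data=(x_te, y_te))
```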
Binary hierarchical multiclass classifier for uncertain numerical features
Marwa Chakroun, Amal Charfi, Sonda Ammar Bouhamed, I. Kallel, B. Solaiman, H. Derbel
DOI: 10.1109/ATSIP49331.2020.9231804
Abstract: Real-world multiclass classification problems involve moderately high-dimensional inputs with a large number of class labels. Moreover, in most real-world applications uncertainty has to be handled carefully, otherwise the classification results may be inaccurate or even incorrect. In this paper, we investigate a binary hierarchical partitioning of the output space in an uncertain framework to overcome these limitations and yield better solutions. Uncertainty is modeled within the quantitative possibility theory framework. Experiments on a real ultrasonic dataset show good performance of the proposed multiclass classifier, with an accuracy rate of 93%.
Citations: 0
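Binary hierarchical partitioning of the output space means the label set is recursively split into two groups, with a binary classifier at each node routing samples toward a leaf that holds a single class. The sketch below shows that skeleton only; the naive halving split, the logistic-regression base classifier, and the toy usage comment are assumptions, and the paper's possibilistic uncertainty modelling is not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

class BinaryHierarchyNode:
    """One node of a binary hierarchical multiclass classifier: the label set
    is split in two halves and a binary classifier routes samples downward."""

    def __init__(self, labels):
        self.labels = list(labels)
        self.clf = None
        self.left = self.right = None

    def fit(self, X, y):
        if len(self.labels) == 1:
            return self  # leaf: a single class remains
        half = len(self.labels) // 2
        left_labels, right_labels = self.labels[:half], self.labels[half:]
        side = np.isin(y, right_labels).astype(int)  # 0 = left group, 1 = right
        self.clf = LogisticRegression(max_iter=1000).fit(X, side)
        mask = side == 0
        self.left = BinaryHierarchyNode(left_labels).fit(X[mask], y[mask])
        self.right = BinaryHierarchyNode(right_labels).fit(X[~mask], y[~mask])
        return self

    def predict_one(self, x):
        if len(self.labels) == 1:
            return self.labels[0]
        branch = self.right if self.clf.predict(x.reshape(1, -1))[0] else self.left
        return branch.predict_one(x)

# Usage sketch: root = BinaryHierarchyNode(sorted(set(y))).fit(X, y)
# where X is an ndarray of features and y a 1-D array of class labels.
```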
Identification of the user by using a hardware device
H. Hamam
DOI: 10.1109/ATSIP49331.2020.9231602
Abstract: In real life, people deal with their interlocutors face to face. In virtual life, our interlocutors are behind the walls of the internet, and we do not know whether they are human beings or programs, so an issue of identity arises. Special attention is given to online banking, since it is a delicate matter. We propose a hybrid software/hardware solution to this identity problem. The bank provides the client with a hardware device containing a set of passwords, each valid for only one online transaction, so that an intercepted password is useless. The password is entered through a device with a USB connector after the identity has been validated by fingerprints or other biometric measures. The concept was validated by designing a USB card that includes a fingerprint reader.
Citations: 0
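The core mechanism above is a list of single-use passwords: each one authorizes exactly one transaction and is then invalidated, so interception gains nothing. The sketch below shows only that server-side bookkeeping under assumed details (salted SHA-256 storage, placeholder password strings); it is not the paper's hardware or protocol design.

```python
import hashlib
import secrets

class OneTimePasswordList:
    """Server-side sketch of a single-use password scheme: each password
    authorizes exactly one transaction and is then invalidated."""

    def __init__(self, passwords):
        # Store only salted hashes of the issued passwords.
        self._salt = secrets.token_bytes(16)
        self._unused = {self._digest(p) for p in passwords}

    def _digest(self, password):
        return hashlib.sha256(self._salt + password.encode()).hexdigest()

    def authorize(self, password):
        """Return True once per password, then reject all later reuses."""
        h = self._digest(password)
        if h in self._unused:
            self._unused.remove(h)
            return True
        return False

# Hypothetical password strings issued with the device.
otp = OneTimePasswordList(["k3X9w1", "pQ72rm", "zL01ac"])
print(otp.authorize("pQ72rm"))  # True  (first use)
print(otp.authorize("pQ72rm"))  # False (replay rejected)
```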