2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP) — Latest Publications

Effective monitoring of an air quality network
R. Baklouti, A. Hamida, M. Mansouri, M. Harkat, M. Nounou, H. Nounou
DOI: 10.1109/ATSIP.2018.8364488 | Published: 2018-05-23
Abstract: Air pollution in urban areas is among the most dangerous forms of pollution, with harmful effects on human health and the ecosystem. Monitoring air quality networks has therefore attracted considerable research interest. In this context, this paper addresses fault detection in an Air Quality Monitoring Network (AQMN). The proposed approach is based on nonlinear principal component analysis (NLPCA) to handle the modeling of nonlinear data. Detection is further improved by combining an exponentially weighted moving average (EWMA) scheme with a hypothesis-testing technique, the generalized likelihood ratio test (GLRT). The approach was evaluated on an AQMN, and the results show good performance compared with classical PCA.
Citations: 3
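The paper's exact detection statistic is not reproduced here, but the EWMA half of the scheme can be sketched as a standard EWMA control chart on model residuals; the smoothing factor, noise level, and limit width below are illustrative assumptions, not the authors' settings.

```python
import math

def ewma_monitor(residuals, lam=0.2, sigma=1.0, L=3.0):
    """Flag faults with an EWMA chart on model residuals.

    lam   : smoothing factor (0 < lam <= 1), assumed value
    sigma : residual standard deviation under normal operation
    L     : control-limit width in standard deviations
    Returns a list of (ewma_value, alarm) pairs.
    """
    z, out = 0.0, []
    for t, r in enumerate(residuals, start=1):
        z = lam * r + (1.0 - lam) * z
        # Time-varying control limit of the EWMA statistic.
        limit = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        out.append((z, abs(z) > limit))
    return out

# Residuals near zero, then a sustained fault of magnitude ~4.
data = [0.1, -0.2, 0.05, 0.1, 4.0, 4.2, 3.9]
flags = [alarm for _, alarm in ewma_monitor(data)]
```

Because the EWMA accumulates evidence over time, a sustained shift raises an alarm even when no single residual would by itself.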
Ergodic capacity for fading channels in cognitive radio networks
Raouia Masmoudi
DOI: 10.1109/ATSIP.2018.8364519 | Published: 2018-03-22
Abstract: In this paper, we consider a Cognitive Radio (CR) system in which two types of users access the primary spectrum: a primary user (PU), which owns the spectrum license, and a secondary user (SU), which does not. The secondary communication is allowed to coexist with the primary one as long as the interference caused by the SU to the PU remains below a tolerable threshold. We study the optimization problem that maximizes the SU's achievable ergodic capacity under different types of power constraints and for different fading-channel models, with the goal of deriving the optimal power-allocation strategies. We show that modelling the channel between the SU transmitter and the PU receiver as Rayleigh fading is an advantageous way to improve the SU ergodic capacity. Furthermore, we consider four combinations of power constraints, since both the interference power constraint and the transmit power constraint can be restricted by a peak or an average threshold. We also show that the SU ergodic capacity under an average transmit power constraint and an average interference power constraint outperforms that under peak power constraints. For this case, we propose a novel decoupling method that reduces the complexity of the initial problem and makes it easier to solve.
Citations: 2
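The ergodic capacity being maximized is an expectation over the fading distribution, C = E[log2(1 + P·g/N0)]; for Rayleigh fading the channel power gain g is exponentially distributed. A minimal Monte Carlo sketch of that expectation (fixed power, not the paper's optimized allocation):

```python
import math
import random

def ergodic_capacity(power, noise=1.0, samples=100_000, seed=42):
    """Monte Carlo estimate of E[log2(1 + power * g / noise)],
    where the channel power gain g ~ Exp(1) models Rayleigh fading."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        g = rng.expovariate(1.0)  # |h|^2 for a unit-variance Rayleigh channel
        total += math.log2(1.0 + power * g / noise)
    return total / samples

# Ergodic capacity grows with the allowed transmit power.
c_low, c_high = ergodic_capacity(1.0), ergodic_capacity(10.0)
```

An optimal power-allocation scheme would replace the fixed `power` with a gain-dependent policy (e.g. water-filling) subject to the peak or average constraints discussed above.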
A computer vision system to detect diving cases in soccer
Hana' Al-Theiabat, Inad A. Aljarrah
DOI: 10.1109/ATSIP.2018.8364457 | Published: 2018-03-21
Abstract: Recently, motion analysis systems have received much attention due to their potential in human motion analysis, which has a wide range of applications. One such application is analyzing tackle scenes in soccer games, where players occasionally try to deceive the referee by intentionally falling to win a free kick or penalty kick. In this paper, we propose a human body tracking system to analyze tackle scenes in soccer games. Its main purpose is to determine whether the falling player in a tackle scene is attempting to deceive the referee (diving) or not. The tackle scene goes through five main processing stages: identification of the falling player, extraction of tracking points, motion tracking, feature extraction, and scene classification. The tracking component is implemented with Kanade-Lucas-Tomasi optical flow, aided by pyramid levels and the forward-backward error algorithm, while classification is carried out in the Weka software using a Naive Bayes tree (NBTree) classifier. The proposed system was implemented and experimentally tested; the results show its potential to detect diving (deceptive falls), with a classification accuracy of 84%.
Citations: 1
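The forward-backward error check mentioned above rejects unreliable tracks: a point is tracked forward one frame and then backward again, and if it does not return close to where it started, the track is discarded. A minimal sketch of that filter (the point data and threshold are illustrative, not from the paper):

```python
def forward_backward_error(p0, p_fb):
    """Euclidean distance between a point's original location p0 and its
    location p_fb after tracking forward one frame and backward again.
    A large value indicates an unreliable track."""
    return ((p0[0] - p_fb[0]) ** 2 + (p0[1] - p_fb[1]) ** 2) ** 0.5

def filter_tracks(starts, round_trips, max_err=1.0):
    """Keep only the points whose forward-backward error is below max_err."""
    return [p for p, q in zip(starts, round_trips)
            if forward_backward_error(p, q) <= max_err]

# A reliable track returns close to its start; a drifting one does not.
starts      = [(10.0, 10.0), (50.0, 20.0)]
round_trips = [(10.2, 9.9), (55.0, 26.0)]
kept = filter_tracks(starts, round_trips)
```

In a real pipeline the forward and backward positions would come from a pyramidal KLT tracker (e.g. OpenCV's `calcOpticalFlowPyrLK`) applied twice, once in each temporal direction.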
Long-term superpixel tracking using unsupervised learning and multi-step integration
Pierre-Henri Conze, F. Tilquin, M. Lamard, F. Heitz, G. Quellec
DOI: 10.1109/ATSIP.2018.8364453 | Published: 2018-03-21
Abstract: In this paper, we analyze how to accurately track superpixels over extended time periods for computer vision applications. A two-step video processing pipeline dedicated to long-term superpixel tracking is proposed, based on unsupervised learning and temporal integration. First, unsupervised learning-based matching provides superpixel correspondences between consecutive and distant frames using context-rich features extended from greyscale to multi-channel. The resulting elementary matches are then combined along multi-step paths running through the whole sequence with various inter-frame distances. This produces a large set of candidate long-term superpixel pairings, over which majority voting is performed. Video object tracking experiments demonstrate the efficiency of this pipeline against state-of-the-art methods.
Citations: 1
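The final integration step above reduces to a majority vote over the candidate pairings produced by the different multi-step paths. A minimal sketch of that vote (candidate IDs are illustrative):

```python
from collections import Counter

def vote_correspondence(candidates):
    """Given all candidate target superpixels proposed for one source
    superpixel by different multi-step paths, return the majority
    winner and its vote count."""
    winner, votes = Counter(candidates).most_common(1)[0]
    return winner, votes

# Three multi-step paths agree on target superpixel 7; one path drifted.
label, votes = vote_correspondence([7, 7, 12, 7])
```

Voting over many redundant paths makes the long-term correspondence robust to occasional matching errors on any single path.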
Efficient image texture and structure preservation via a rapid diffusion function with accelerator
S. Tebini, H. Seddik, E. B. Braiek
DOI: 10.1109/ATSIP.2018.8364526 | Published: 2018-03-21
Abstract: This paper proposes a new fast diffusion function with an accelerator coefficient for the image restoration task. The suggested algorithm is driven by the gradient magnitude and suitably filters the image while preserving edges and texture. Several comparisons with recent works are given to show the efficiency of the new conduction function. Its performance is demonstrated using quantitative (PSNR, MSSIM) and qualitative (visual evaluation) metrics.
Citations: 3
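The paper's specific conduction function is not given in the abstract, but functions of this family build on the classical Perona-Malik conduction term, which diffuses strongly in flat regions and shuts off at edges; the edge-threshold value below is an assumed, illustrative setting.

```python
import math

def conduction(grad_mag, k=15.0):
    """Classical Perona-Malik conduction function g(|grad I|).

    Diffusion is strong where the gradient is small (flat regions)
    and suppressed where it is large (edges), which is what preserves
    structure during smoothing. k is the edge-sensitivity threshold.
    """
    return math.exp(-(grad_mag / k) ** 2)

flat, edge = conduction(1.0), conduction(60.0)
```

An "accelerator coefficient" in the sense of the title would scale this term to speed convergence in flat regions without weakening the edge-stopping behavior.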
Feature selection and classification using multiple kernel learning for brain tumor segmentation
Naouel Boughattas, Maxime Bérar, K. Hamrouni, S. Ruan
DOI: 10.1109/ATSIP.2018.8364470 | Published: 2018-03-21
Abstract: We propose a brain tumor segmentation method for multi-sequence images. The method selects the most relevant features and segments edema and tumor using a classification algorithm based on Multiple Kernel Learning (MKL). With the MKL algorithm, one or more kernels can be associated with each feature, and each kernel carries a weight reflecting its importance in the classification. A sparsity constraint on the kernel weights forces some weights to zero, corresponding to insignificant kernels (non-informative features). Our method was evaluated on the real patient dataset of the MICCAI 2012 BraTS challenge; the results show that it is competitive with the winning methods.
Citations: 9
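The feature-selection mechanism described above amounts to a weighted combination of per-feature kernels in which zero-weight kernels drop out entirely. A minimal sketch of that combination (the kernel values and weights are illustrative, not learned):

```python
def combine_kernels(kernel_values, weights, eps=1e-6):
    """Weighted sum of per-feature kernel evaluations for one sample
    pair, skipping kernels whose weight was driven to (near) zero by
    the sparsity constraint - those features are effectively discarded."""
    return sum(w * k for w, k in zip(weights, kernel_values) if w > eps)

# Two informative kernels survive; the third was zeroed out by sparsity.
k_vals = [0.8, 0.5, 0.9]
w      = [0.6, 0.4, 0.0]
combined = combine_kernels(k_vals, w)
```

In the full method the weights would be learned jointly with the classifier (e.g. an SVM) under an L1-type constraint, so feature selection and classification happen in one optimization.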
Electromagnetic study of the breast for biomedical applications
Marwa Slimi, P. Mendes, B. Jmai, A. Gharsallah
DOI: 10.1109/ATSIP.2018.8364478 | Published: 2018-03-21
Abstract: This work presents an electromagnetic study of the breast for biomedical applications. The study is carried out in the Industrial, Scientific and Medical (ISM) band at 2.4 GHz. The electromagnetic simulation results are obtained with Computer Simulation Technology Microwave Studio (CST-MWS), which is based on the finite integration technique (FIT). The parameter treated for the immunity test of the breast is the Specific Absorption Rate (SAR). We propose four models of the breast phantom to obtain a comparative study of the effects of breast size and of the thickness of the different layers. The simulation results show that the immunity of the breast depends on the input power and the breast size.
Citations: 2
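The SAR evaluated in these simulations follows the standard point definition SAR = σ|E|²/ρ, with tissue conductivity σ (S/m), electric-field magnitude |E| (V/m), and mass density ρ (kg/m³). A one-line sketch of that formula; the tissue values below are rough illustrative numbers, not the paper's phantom parameters.

```python
def sar(conductivity, e_field, density):
    """Point Specific Absorption Rate in W/kg: SAR = sigma * |E|^2 / rho."""
    return conductivity * e_field ** 2 / density

# Illustrative values only (low-conductivity fatty tissue, modest field).
value = sar(conductivity=0.14, e_field=30.0, density=1000.0)
```

Regulatory limits are stated on SAR averaged over 1 g or 10 g of tissue, which a full-wave solver such as CST-MWS computes by integrating this point quantity over the phantom mesh.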
Towards change detection in bi-temporal images using evidential conflict
Fatma Haouas, Z. B. Dhiaf, A. Hammouda, B. Solaiman
DOI: 10.1109/ATSIP.2018.8364335 | Published: 2018-03-21
Abstract: This paper presents a new method for change detection in remotely sensed imagery based on the Dempster-Shafer framework. The method rests on the analysis and interpretation of multi-temporal conflict, which is used as a new index of change. Accordingly, a pre-change map is produced from the multi-temporal conflict map between the two bi-temporal images, deduced from the empty-set mass. This shows that Dempster-Shafer theory can be applied in a new way to change detection, where the conflict imperfection allows mining reliable and non-trivial information about change. The effectiveness of conflict for multi-temporal change mapping is demonstrated on bi-temporal Landsat imagery.
Citations: 4
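The empty-set mass used as a change index above is the Dempster-Shafer conflict: the total mass assigned to disjoint focal elements when combining the evidence from the two dates. A minimal per-pixel sketch on a two-hypothesis frame (the mass values are illustrative):

```python
def conflict_mass(m1, m2):
    """Dempster-Shafer conflict between two mass functions, given as
    {frozenset: mass} dicts: the total mass landing on the empty set
    during combination, i.e. the sum over disjoint focal elements."""
    return sum(v1 * v2
               for a, v1 in m1.items()
               for b, v2 in m2.items()
               if not (a & b))

C, N = frozenset({"change"}), frozenset({"no_change"})
m_t1 = {C: 0.7, N: 0.3}  # evidence extracted from the image at date t1
m_t2 = {C: 0.2, N: 0.8}  # evidence extracted from the image at date t2
k = conflict_mass(m_t1, m_t2)
```

A high conflict k at a pixel means the two dates' evidence disagrees, which is precisely what flags it as changed in the pre-change map.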
State-of-the-art face recognition performance using publicly available software and datasets
Mohamed Amine Hmani, D. Petrovska-Delacrétaz
DOI: 10.1109/ATSIP.2018.8364450 | Published: 2018-03-21
Abstract: We are interested in the reproducibility of face recognition systems. By reproducibility we mean: are the scientific community, and researchers from different sides, capable of reproducing the latest results published by a big company that has huge computational power and huge proprietary databases at its disposal? With the constant advances in GPU computing power and the availability of open-source software, reproducing published results should not be a problem. But if the system architectures are private and the databases proprietary, published results cannot easily be reproduced. To tackle this problem, we focus on training and evaluating face recognition systems on publicly available data and software. We are also interested in comparing the best Deep Neural Network (DNN) based results with a baseline "classical" system. This paper exploits the OpenFace open-source system to generate a deep convolutional neural network model using publicly available datasets. We study the impact of the size and quality of the datasets and compare the performance to a classical face recognition approach. Our focus is on having a fully reproducible model. To this end, we used publicly available datasets (FRGC, MS-Celeb-1M, MOBIO, LFW) as well as publicly available software (OpenFace) to train our face recognition model. Our best trained model achieves 97.52% accuracy on the Labeled Faces in the Wild (LFW) dataset, which is lower than Google's best reported result of 99.96% but slightly better than Facebook's reported result of 97.35%. We also evaluated our best model on the challenging video dataset MOBIO and report results competitive with the best reported on this database.
Citations: 9
A real-time emotion recognition system for disabled persons
Y. Rabhi, M. Mrabet, F. Fnaiech, M. Sayadi
DOI: 10.1109/ATSIP.2018.8364339 | Published: 2018-03-21
Abstract: To ensure safe navigation for an electric wheelchair user, both the environment and the user must be kept under surveillance for any potentially endangering act, whether intentional or unintentional. This paper proposes a real-time embedded emotion recognition system designed for an electric wheelchair to detect, exploit, and evaluate the emotional state of an elderly user or a user with a cognitive impairment. An RPi camera board connected to a Raspberry Pi 2 Model B processing device captures frames from a recorded video of the user's varying facial expressions; the captured frames are then processed by a Python script to detect the face and recognize the apparent emotion. Various techniques are employed for face detection, facial feature extraction, and emotion classification, including HOG, regression trees, and PCA.
Citations: 6