{"title":"Effective monitoring of an air quality network","authors":"R. Baklouti, A. Hamida, M. Mansouri, M. Harkat, M. Nounou, H. Nounou","doi":"10.1109/ATSIP.2018.8364488","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364488","url":null,"abstract":"Air pollution in urban areas can be considered one of the most dangerous types of pollution, as it can impact human health and the ecosystem. Hence, monitoring air quality networks has attracted the interest of various research studies. In this context, this paper deals with fault detection in an Air Quality Monitoring Network. The proposed approach is based on nonlinear principal component analysis to cope with the modeling of nonlinear data. In addition, fault detection is improved by combining an exponentially weighted moving average with a hypothesis testing technique: the generalized likelihood ratio test. The evaluation was carried out on an Air Quality Monitoring Network (AQMN). The results revealed good performance compared to classical PCA.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"107 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-05-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123791958","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Ergodic capacity for fading channels in cognitive radio networks","authors":"Raouia Masmoudi","doi":"10.1109/ATSIP.2018.8364519","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364519","url":null,"abstract":"In this paper, we consider a Cognitive Radio (CR) system where two types of users try to access the primary spectrum: a primary user (PU), who owns the spectrum license, and a secondary user (SU), who does not. The secondary communication is allowed to coexist with the primary communication as long as the interference caused by the SU to the PU is below a tolerable threshold. We study the optimization problem that maximizes the SU's achievable ergodic capacity under different types of power constraints and for different fading channel models. Our goal is to compute the optimal power allocation strategies for these optimization problems. We show that modeling the channel between the SU transmitter and the PU receiver with Rayleigh fading is an advantageous way to improve the SU ergodic capacity. Furthermore, we consider four combinations of power constraints, since the interference power constraint and the transmit power constraint can each be restricted by a peak or an average threshold. We also show that the SU ergodic capacity under an average transmit power constraint and an average interference power constraint outperforms the one with peak power constraints. In this case, we propose a novel decoupling method that reduces the complexity of the initial problem and makes it easier to solve.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"57 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124748957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A computer vision system to detect diving cases in soccer","authors":"Hana' Al-Theiabat, Inad A. Aljarrah","doi":"10.1109/ATSIP.2018.8364457","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364457","url":null,"abstract":"Recently, motion analysis systems have received a lot of attention due to their potential in human motion analysis, which has a wide range of applications. One of these applications is analyzing tackle scenes in soccer games. In a tackle scene, players occasionally try to deceive the referee by intentionally falling to get a free or penalty kick. In this paper, we propose a human body tracking system to analyze tackle scenes in soccer games. The main idea behind this system is to determine whether the falling player in the tackle scene is attempting to deceive the referee (diving) or not. In this system, the tackle scene goes through five main stages of processing: identification of the falling player, extraction of tracking points, motion tracking, feature extraction, and scene classification. The tracking component is implemented using Kanade-Lucas-Tomasi optical flow with the aid of pyramid levels and the forward-backward error algorithm, while the classification is carried out using the Weka software with a Naive Bayes tree (NB tree) classifier. The proposed system is implemented and its performance is experimentally tested. The results show a potential to detect diving cases (deceptive falls), with a classification accuracy of 84%.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129889941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Long-term superpixel tracking using unsupervised learning and multi-step integration","authors":"Pierre-Henri Conze, F. Tilquin, M. Lamard, F. Heitz, G. Quellec","doi":"10.1109/ATSIP.2018.8364453","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364453","url":null,"abstract":"In this paper, we analyze how to accurately track superpixels over extended time periods for computer vision applications. A two-step video processing pipeline dedicated to long-term superpixel tracking is proposed based on unsupervised learning and temporal integration. First, unsupervised learning-based matching provides superpixel correspondences between consecutive and distant frames using context-rich features extended from greyscale to multi-channel. Resulting elementary matches are then combined along multi-step paths running through the whole sequence with various inter-frame distances. This produces a large set of candidate long-term superpixel pairings upon which majority voting is performed. Video object tracking experiments demonstrate the efficiency of this pipeline against state-of-the-art methods.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127907451","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Efficient image texture and structure preservation via a Rapid diffusion function with accelerator","authors":"S. Tebini, H. Seddik, E. B. Braiek","doi":"10.1109/ATSIP.2018.8364526","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364526","url":null,"abstract":"This paper proposes a new fast diffusion function with an accelerator coefficient for the image restoration task. The suggested algorithm is based on the gradient magnitude and suitably filters the image while preserving edges and texture. Several comparisons with recent works are given to show the efficiency of the new conduction function. This performance is demonstrated using quantitative (PSNR, MSSIM) and qualitative (visual evaluation) metrics.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130292269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Feature selection and classification using multiple kernel learning for brain tumor segmentation","authors":"Naouel Boughattas, Maxime Bérar, K. Hamrouni, S. Ruan","doi":"10.1109/ATSIP.2018.8364470","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364470","url":null,"abstract":"We propose a brain tumor segmentation method from multi-sequence images. The method selects the most relevant features and segments edema and tumor using a classification algorithm based on Multiple Kernel Learning (MKL). Using the MKL algorithm, we can associate one or more kernels with each feature. Each kernel is associated with a weight reflecting its importance in the classification. A sparsity constraint on the kernel weights forces some weights to zero, corresponding to insignificant kernels (non-informative features). Our method was evaluated on the real patient dataset of the MICCAI 2012 BraTS challenge. The results show that our method is competitive with the winning methods.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"144 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132470382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Electromagnetic study of the breast for biomedical applications","authors":"Marwa Slimi, P. Mendes, B. Jmai, A. Gharsallah","doi":"10.1109/ATSIP.2018.8364478","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364478","url":null,"abstract":"This work presents an electromagnetic study of the breast for biomedical applications. The study is carried out in the Industrial, Scientific and Medical (ISM) band at a frequency of 2.4 GHz. The simulation results of the electromagnetic aspects of the proposed test are obtained with Computer Simulation Technology Microwave Studio (CST-MWS), based on the finite integration technique (FIT). The parameter treated for the immunity test of the breast is the Specific Absorption Rate (SAR). We propose four models of the breast phantom to obtain a comparative study in terms of the size effect and the thickness of the different layers. The simulation results show that the immunity of the breast depends on the power level and the breast size.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"22 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123820984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Towards change detection in bi-temporal images using evidential conflict","authors":"Fatma Haouas, Z. B. Dhiaf, A. Hammouda, B. Solaiman","doi":"10.1109/ATSIP.2018.8364335","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364335","url":null,"abstract":"This paper presents a new method for change detection in remote sensing imagery based on the Dempster-Shafer framework. The method is built on the analysis and interpretation of multi-temporal conflict, which is used as a new index of change. Accordingly, a preliminary change map was produced from the multi-temporal conflict map between the two bi-temporal images, which was deduced from the empty-set mass. This demonstrates that Dempster-Shafer theory can be applied in a new way to change detection, where the conflict imperfection allows mining reliable and non-trivial information about change. The effectiveness of the conflict for multi-temporal change mapping was demonstrated using bi-temporal Landsat imagery.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125629344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"State-of-the-art face recognition performance using publicly available software and datasets","authors":"Mohamed Amine Hmani, D. Petrovska-Delacrétaz","doi":"10.1109/ATSIP.2018.8364450","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364450","url":null,"abstract":"We are interested in the reproducibility of face recognition systems. By reproducibility we mean: is the scientific community, and are researchers from different sides, capable of reproducing the latest results published by a big company that has at its disposal huge computational power and huge proprietary databases? With the constant advancements in GPU computation power and the availability of open-source software, the reproducibility of published results should not be a problem. But if the architectures of the systems are private and the databases are proprietary, the reproducibility of published results cannot easily be attained. To tackle this problem, we focus on training and evaluating face recognition systems on publicly available data and software. We are also interested in comparing the best Deep Neural Network (DNN) based results with a baseline “classical” system. This paper exploits the OpenFace open-source system to generate a deep convolutional neural network model using publicly available datasets. We study the impact of the size and quality of the datasets and compare the performance to a classical face recognition approach. Our focus is to have a fully reproducible model. To this end, we used publicly available datasets (FRGC, MS-Celeb-1M, MOBIO, LFW), as well as publicly available software (OpenFace), to train our model for face recognition. Our best trained model achieves 97.52% accuracy on the Labeled Faces in the Wild (LFW) dataset, which is lower than Google's best reported result of 99.96% but slightly better than Facebook's reported result of 97.35%. We also evaluated our best model on the challenging video dataset MOBIO and report competitive results with the best reported results on this database.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131425313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A real-time emotion recognition system for disabled persons","authors":"Y. Rabhi, M. Mrabet, F. Fnaiech, M. Sayadi","doi":"10.1109/ATSIP.2018.8364339","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364339","url":null,"abstract":"In order to ensure safe navigation for an electric wheelchair user, both the environment and the user must be kept under surveillance for any potentially endangering act, whether intentional or unintentional. This paper proposes a real-time embedded emotion recognition system designed for an electric wheelchair to detect, exploit and evaluate the emotional state of an elderly user or a user with a cognitive impairment. An RPi camera board connected to a Raspberry Pi 2 Model B processing device is employed to capture frames from a recorded video of the variation in the user's facial expressions; the captured frames are then processed using a Python script to detect the face and recognize the apparent emotion. A set of various techniques is employed for face detection, facial feature extraction, and emotion classification, such as HOG, regression trees, and PCA.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115071644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}