{"title":"Neural controller of autonomous driving mobile robot by an embedded camera","authors":"Hajer Omrane, M. Masmoudi, M. Masmoudi","doi":"10.1109/ATSIP.2018.8364445","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364445","url":null,"abstract":"The purpose of this paper is to build an autonomous RC Car that uses Artificial Neural Network (ANN) for control. It describes the theory behind the neural network and autonomous vehicles, and how a prototype with a camera as its only input can be designed to test and evaluate the algorithm capabilities. The ANN is a good algorithm that could help recognize patterns in an image, it can with a training set, containing 2000 images, classify an image with 96% of accuracy rate. The main contribution of this paper consists in using a single camera for navigation, possibly for obstacle avoidance.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"190 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114745860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Forecasting of the normalized difference vegetation index time series in Jbeniana","authors":"Marwa Hachicha, M. Louati, A. Kallel","doi":"10.1109/ATSIP.2018.8364499","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364499","url":null,"abstract":"This paper deals with the study of the vegetation cover variation through the Normalized Difference Vegetation Index (NDVI) in the area of Jbeniana (Sfax, Tunisia). For this purpose, we use the images given by Sentinel-2. This allows us to present the time series of NDVI. Moreover, we use some statistical models such as smoothing and autoregressive models to forecast the NDVI time series. Obtained results validate the proposed approach.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129899867","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An adaptive semantic dimensionality reduction approach for hyperspectral imagery classification","authors":"Rawaa Hamdi, A. Sellami, I. Farah","doi":"10.1109/ATSIP.2018.8364504","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364504","url":null,"abstract":"Hyperspectral imagery (HSI) is widely used for several fields of remote sensing such as agriculture, land cover monitoring, and deforestation. However, the HSI classification is a challenge task due to the large number of spectral bands, unavailability of training samples, and the high correlation inter-bands. To address these challenges, we propose in this work a semantic reduction dimensionality approach based on the principal component analysis (PCA) and mutual information-based band selection (MI). Firstly, we project the original HSI using PCA to obtain a novel subspace with lower dimensions. Using the obtained components, a set of rules can be generated to find the relevant spectral bands based on score contribution coefficient. Moreover, the mutual information (MI) is used to select the spectral bands that contain a higher information based on the entropy criterion. We propose then to exploit the selected bands for HSI classification using SVM technique. Experiment results demonstrate that our proposed approach is effective and perform for HSI classification compared to other dimensionality reduction approaches.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128714162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Comparing the scalogram images of the Beta-Globin gene in Homosapiens and Pan Troglodytes","authors":"Imen Messaoudi, A. Oueslati, Z. Lachiri","doi":"10.1109/ATSIP.2018.8364466","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364466","url":null,"abstract":"Gene sequences have enabled investigators to explore questions of genomes complexity in related organisms. The ß-globin gene is a successful example which is widely used in comparative genomics and evolutionary analyses. Herein, we report the results of a comparative DNA sequence analysis of the homologous β-globin genes in two taxa: Homosapies (human) and Pan Troglodytes (Chimpanzee). To this end, we propose a new method to represent the DNA sequences into scalogram images based on the complex Morlet wavelet transform. As for the coding method which transforms the DNA letters into a signal, we use the Frequency Chaos Game Signal of trinucleotides (coined FCGS3). The statistical analysis of the two HBB-gene images shows good correlation between the proposed method and the DNA alignment result.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130980481","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"FFT implementation and optimization on FPGA","authors":"T. Belabed, S. Jemmali, C. Souani","doi":"10.1109/ATSIP.2018.8364454","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364454","url":null,"abstract":"Nowadays, the development of the Fast Fourier Transform (FFT) remains of a great importance due to its substantial role in the field of signal processing and imagery. This latter still attracts the attention of several researchers around the globe. In this paper, an optimized design of the FFT using the radix-2 algorithm, 32 point is proposed. The developed architecture was implemented using an FPGA regarding its flexibility as well as its parallelism and its computational speed. Though, the material resources of the FPGA are limited, particularly the integrated DSP blocks, a new calculation approach was introduced during the VHDL description with the aim to reduce the necessary number of multiplication operation. The experimental validation of the adopted architecture was realized using a Virtex 6, where the numerical synthesis and the post and route described in VHDL was realized using ISE Design Suite 14.7.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"181 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116062983","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semi-automatic lymph node segmentation and classification using cervical cancer MR imaging","authors":"Nesrine Bnouni, Olfa Mechi, I. Rekik, M. S. Rhim, N. Amara","doi":"10.1109/ATSIP.2018.8364480","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364480","url":null,"abstract":"The segmentation and classification of Lymph Nodes (LNs) is a fundamental but challenging step in the analysis of medical images of cervical cancer. Both tasks can leverage morphological features such as size, shape, contour, and heterogeneous appearance. However, these features might vary with the progressive state of LNs. Hence, accurate detection of LNs boundary is an essential step sing to classify LN as suspect (malignant) and non-suspect (benign). However, manual delineation of LNs might produce classification errors due to the inter and intra-observer variability. Semi-automatic and automatic LNs segmentation methods are greatly desired as they would help improve patient diagnosis and treatment processes. Currently, Magnetic Resonance Imaging (MRI) is widely used to diagnose cervical cancer and LN involvement. Diffusion Weighted (DW)-MRI exhibits metastatic LN as bright regions. This paper presents a semi-automatic segmentation and classification method of LNs. Specifically, we propose a novel approach which leverages (1) the complementarity of structural and diffusion MR images through a fusion step and (2) morphological features of the segmented metastatic LNs for classification. The contribution of our proposed algorithm is threefold. First, we fuse the axial T2-Weighted (T2-w) anatomical image and the DW image. Second, we detect LNs using region-growing method in order to compute the final classification. Third, segmentation results are then used to classify LNs based on a gray level dependency matrix technique which extracts LN features. We evaluated our method using 10 MR images T2-w and DW with 47 metastatic LNs. We obtained an average accuracy of 70.21% for cervical cancer nodule classification.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121639151","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Non blind image restoration scheme combining parametric wiener filtering and BM3D denoising technique","authors":"Zouhair Mbarki, H. Seddik, E. B. Braiek","doi":"10.1109/ATSIP.2018.8364524","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364524","url":null,"abstract":"Image restoration technique is the operation of taking a noisy image and estimating the clean, original image. The main goal of non-blind image restoration is to estimate the true image assuming the blur is known. A fundamental method in the filtering theory used commonly for image restoration is the Wiener filter. The drawback of this method is the need for a priori knowledge of the degradation function, the blurred image and the statistical properties of the noise process. In this work, a non blind image restoration algorithm using the parametric wiener filtering and BM3D denoising technique has been proposed. Firstly, the degraded image is deconvoluted in Fourier space by parametric Wiener filtering, and then, it is smoothed by the BM3D technique. Experimental results are interesting and encouraging.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"71 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122607824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semi-automatic algorithm for 3D volume reconstruction of inner ear structures based on CT-scan images","authors":"Adil Bouchana, Jamal Kharroubi, Mohammed Ridal","doi":"10.1109/ATSIP.2018.8364474","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364474","url":null,"abstract":"Inner ear volume rendering and segmentation are very difficult tasks due to the nature, the size and position of the structure in the temporal bone. However, it is highly useful to reconstruct three-dimensional volume of computed tomography images which will help in determining the anatomy and topographic relationship between various important structures in the temporal bone, as well as the understanding of the anatomical structures of the inner ear. It can be also very useful in making easier the diagnosis of inner ear diseases. We propose a novel approach with minor intervention of specialists, and region growing segmentation, allowing 3D volume reconstruction of inner ear structures form Computed Tomography (CT) images. The obtained results achieve high quality of segmentation and visualization in addition to usability for other systems, such as computer aided diagnosis (CAD) systems.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128611286","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Haralick feature selection for material rigidity recognition using ultrasound echo","authors":"Sonda Ammar Bouhamed, Marwa Chakroun, I. Kallel, H. Derbel","doi":"10.1109/ATSIP.2018.8364483","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364483","url":null,"abstract":"Object classification based on its rigidity requires principally the recognition of its material consistency. Generally, material consistency can be divided into two families, hard material and soft one. In this context, a new approach based on ultrasonic signal for consistency recognition of object materials is proposed. This approach allows distinguishing between the hard and the soft objects. Material consistency determination is based on Haralick's texture features. Then, a feature selection step is considered to select the most discriminative features. Only three Haralick features were used to assess the efficiency classifications models. As there is no affording dataset of ultrasonic signals acquired for material rigidity recognition, we develop our dataset using two ultrasonic sensors. In this context, no previous work has considered such a challenge. The analysis results show that three parameters (entropy, sum of entropy and variance) were found to be effective to discriminate between the two classes of material rigidity. The obtained results show the efficiency of the proposed method.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127825317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Semi fragile watermarking scheme for image recovery in wavelet domain","authors":"Hanen Rhayma, A. Makhloufi, H. Hamam, A. Hmida","doi":"10.1109/ATSIP.2018.8364447","DOIUrl":"https://doi.org/10.1109/ATSIP.2018.8364447","url":null,"abstract":"Image authentication watermarking scheme can substantially solve the security of digital images transmitted through insecure channel. In this paper we propose a semi-fragile watermarking scheme for image authentication, localizing and recovering. The approximation sub-band of the second Discrete Wavelet Transformation (DWT), LL2 is used as recovery watermark while the fifth approximation sub-band LL5 is used as authentication and localizing watermark. The two watermarks are embedded into the first approximation sub-band LL1 using the Quantization Index Modulation (QIM). To reduce the size of the recovery watermark, Data Representation through Combination (DRC) is practically used. The experimental results show that our proposed algorithm can resist JPEG compression and give an acceptable estimation of the watermarked image even after the watermarked image has been tampered.","PeriodicalId":332253,"journal":{"name":"2018 4th International Conference on Advanced Technologies for Signal and Image Processing (ATSIP)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2018-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125620723","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}