Title: Estimation of Leaf Area in Bell Pepper Plant using Image Processing techniques and Artificial Neural Networks
Authors: Vahid Mohammadi, S. Minaei, A. Mahdavian, M. Khoshtaghaza, P. Gouton
DOI: https://doi.org/10.1109/ICSIPA52582.2021.9576778
Published in: 2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), 13 September 2021
Abstract: Measuring and estimating the physical properties of plant leaves are important requirements for monitoring and optimizing plant growth. This study applied image processing and artificial intelligence techniques to the non-invasive, non-destructive estimation of bell pepper leaf properties during the first month of growth. Physical properties of bell pepper leaves were extracted from RGB images with an algorithm based on the gradient magnitude and the watershed transform. Leaf area, the most important growth index, was estimated as a function of other physical parameters, including leaf length, width, and perimeter. Using stereo imaging, the distance of each leaf from the camera was measured and used in the pixel-wise calculations. Artificial neural networks (ANNs) were trained on a database of measured leaf properties (311 bell pepper plant leaves). The developed algorithm detected and separated leaves with a success rate of 84.32%, and a Multilayer Perceptron (MLP) network estimated leaf area with a validation performance of 0.912.
Title: Identification of the Writer of Historical Documents via Geometric Modeling of the Handwriting
Authors: Dimitris Arabadjis, C. Papaodysseus, A. R. Mamatsis
DOI: https://doi.org/10.1109/ICSIPA52582.2021.9576800
Published in: 2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), 13 September 2021
Abstract: In this work, a generic framework is developed that simultaneously embeds 2D shape registration, comparison, and grouping, under the assumption that shapes of the same class are distorted level sets of the same implicit function. The corresponding system addresses the automatic classification of documents according to their writers. The data processed by the system are the realizations of the individual characters, which are mutually aligned per document and per character, modulo affine transformations, and then reduced to a single representative shape. Stationarity conditions of these representatives are then used to statistically test the hypothesis that different documents share a single representative. The documents are grouped according to writer by determining the maximal groups that also maximize the joint probability of the classification, computed over the characters that occur in all documents. The system was tested on 26 pages of Byzantine manuscripts preserving the Iliad; the computed classification of these pages into 4 writers was independently verified by expert scientists.
Title: A Smart Flight Controller based on Reinforcement Learning for Unmanned Aerial Vehicle (UAV)
Authors: F. Khan, M. N. Mohd, R. M. Larik, Muhammad Danial Khan, Muhammad Inam Abbasi, Susama Bagchi
DOI: https://doi.org/10.1109/ICSIPA52582.2021.9576806
Published in: 2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA), 13 September 2021
Abstract: Traditional flight controllers are based on Proportional-Integral-Derivative (PID) control, which offers strong stability but requires substantial human intervention. In this study, a smart flight controller is developed that operates a UAV without an operator in the loop. It uses a neural network trained with reinforcement learning techniques. Acting on several actuators (pitch, yaw, roll, and speed), this next-generation flight controller is trained to make its own control decisions in flight, and its learning algorithm differs from the traditional actor-critic setup. The agent receives state information from the environment, computes the reward function from the sensor data, and then selects actions accordingly. The performance of the trained network and its reward function is demonstrated in both simulation and real-time UAV control. Experimental results show that the controller responds with reasonable precision: within the same framework the UAV can reliably hover, even under adverse initialization conditions and in the presence of obstacles. Reward functions computed during flight for 2500, 5000, 7500, and 10000 episodes lie between the normalized values 0 and -4000, and the observed computation time per episode is 15 microseconds.
{"title":"Nom Document Background Removal Using Generative Adversarial Network","authors":"Loc Ho, S. Tran, Dinh Dien","doi":"10.1109/ICSIPA52582.2021.9576764","DOIUrl":"https://doi.org/10.1109/ICSIPA52582.2021.9576764","url":null,"abstract":"In this research, we present a new technique to improve the performance of a Nom-character recognition system. Nom-character recognition is a challenging problem in pattern recognition. Especially these characters are not only blurred or distorted in a paper of a historical document containing ink strokes and symbols created by readers. Generative Adversarial Network (GAN) is one of the advanced versions of deep neural networks applied to generate artificial photos of objects [28]. Many versions of GAN have been malfunctioned recently to help the learning process be more stable and realistic to maximize features extracted from the data. We have been using a recent version of GAN to extract characters from images with complex backgrounds and brightness. This task is to retrieve clean text images from complex and noisy background sources. To the best of our knowledge, we perform the test on the Nom Dataset, which characterizes by multiple noise forms. The results demonstrate that this approach can help to improve any Nom-character recognition system.","PeriodicalId":326688,"journal":{"name":"2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117339516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Study on Staircase Artifacts in Total Variation Image Restoration","authors":"T. Adam, M. Hassan, R. Paramesran","doi":"10.1109/ICSIPA52582.2021.9576763","DOIUrl":"https://doi.org/10.1109/ICSIPA52582.2021.9576763","url":null,"abstract":"The total variation (TV) regularization is used in various image processing domains such as image super-resolution, reconstruction, compressed sensing, and restoration mainly due to its edge-preserving capabilities. However, the main problem when using the TV regularization is the staircase artifacts. For image restoration, the staircase artifacts manifest themselves by producing a smeared and blocky restored image, especially when the noise level is high. This problem has been a long-standing problem, and various improvements to TV regularization have been proposed. This paper studies the effects of the staircase artifacts produced by two different noises; Gaussian noise and salt-and-pepper noise. For this purpose, we compare three well-known algorithms, the alternating direction method of multipliers (ADMM), alternating minimization (AM), and accelerated AM, and observe the effects of staircase artifacts produced between the three algorithms. As a by-product, the accelerated AM tested for the salt-and-pepper noise can be seen as a new extension of the existing accelerated AM method. Results show that it is interesting to study further the effects of different types of noise and the algorithms to mitigate the staircase artifacts produced.","PeriodicalId":326688,"journal":{"name":"2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-09-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121104895","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Deep Learning Radio Frequency Signal Classification with Hybrid Images","authors":"Hilal Elyousseph, M. Altamimi","doi":"10.1109/ICSIPA52582.2021.9576786","DOIUrl":"https://doi.org/10.1109/ICSIPA52582.2021.9576786","url":null,"abstract":"In recent years, Deep Learning (DL) has been successfully applied to detect and classify Radio Frequency (RF) Signals. A DL approach is especially useful since it identifies the presence of a signal without needing full protocol information, and can also detect and/or classify non-communication waveforms, such as radar signals. This work focuses on the different pre-processing steps that can be used on the input training data, and tests the results on a fixed DL architecture. While previous works have mostly focused exclusively on either time-domain or frequency domain approaches, in this work a hybrid image is proposed that takes advantage of both time and frequency domain information, and tackles the classification as a Computer Vision problem. The initial results point out limitations to classical pre-processing approaches while also showing that it’s possible to build a classifier that can leverage the strengths of multiple signal representations.","PeriodicalId":326688,"journal":{"name":"2021 IEEE International Conference on Signal and Image Processing Applications (ICSIPA)","volume":"285 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132367092","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}