{"title":"Unsupervised machine learning via Hidden Markov Models for accurate clustering of plant stress levels based on imaged chlorophyll fluorescence profiles & their rate of change in time","authors":"Julie Blumenthal, D. Megherbi, R. Lussier","doi":"10.1109/CIVEMSA.2014.6841442","DOIUrl":"https://doi.org/10.1109/CIVEMSA.2014.6841442","url":null,"abstract":"Chlorophyll fluorescence (ChlF), a plant's time-varying response to stressors, has long been known to be a useful tool for detecting plant stress. Early and accurate plant stress detection is imperative for timely and appropriate intervention. One major limitation of prior work is that, in general, only a few key inflection points from a localized section of a chlorophyll fluorescence signal are used to calculate single index values. These values yield very limited insight into stress level or type. In this paper, we present a method for plant stress classification that uses global (versus local) ChlF time-varying signal data acquired via imaging. We classify this time-varying intensity signal using a Hidden Markov Model (HMM). While HMMs have been used in other fields, this paper presents their first application to plant stress clustering and classification. We show how the proposed selection of a plant's entire low-pass-filtered chlorophyll fluorescence signal profile, as a global feature, improves the accuracy of plant stress classification. Additionally, we show how the rate of change in time of the plant ChlF intensity profiles further improves the classification accuracy. Finally, we present experimental results that show the value and potential of the proposed method to enable more accurate and specific classification of plant stressor levels and stressor types.","PeriodicalId":228132,"journal":{"name":"2014 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115289698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Intelligent decision system for measured dust distributions impairing satellite communications","authors":"Omair Butt, K. Harb, S. Abdul-Jauwad","doi":"10.1109/CIVEMSA.2014.6841447","DOIUrl":"https://doi.org/10.1109/CIVEMSA.2014.6841447","url":null,"abstract":"Dust and sand storms are regarded as a complex meteorological phenomenon due to their high degree of uncorrelated features. Dust particle size distribution, dielectric constants, visibility level during dusty weather, and probable dust storm height are a few of the significant parameters required to model dust storms mathematically. An optimal solution to such weather-induced impairments depends on the precision with which the aforementioned parameters are estimated. This paper presents experimental dust and sand particle size distributions for Saudi Arabia based on Sieve and Hydrometer tests. A layered dust storm model has been applied to the measured data to compute dust attenuation. Finally, an intelligent decision system has been developed to effectively counter weather-induced signal degradations while maintaining the promised quality of service (QoS) for earth-satellite links. SNR simulation results for the measured data, before and after incorporating our proposed system, show significant overall improvements.","PeriodicalId":228132,"journal":{"name":"2014 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"104 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115736562","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Virtual calibration environment for a-priori estimation of measurement uncertainty","authors":"C. Gugg, M. Harker, P. O’Leary","doi":"10.1109/CIVEMSA.2014.6841438","DOIUrl":"https://doi.org/10.1109/CIVEMSA.2014.6841438","url":null,"abstract":"During product engineering of a measuring instrument, a central question is which measures are necessary to achieve the highest possible measurement accuracy. In this context, a measuring instrument's target uncertainty is an essential part of its requirement specifications, because it is an indicator of the measurement's overall quality. This paper introduces an algebraic framework to determine the confidence and prediction intervals of a calibration curve; the matrix-based framework greatly simplifies the associated proofs and implementation details. The regression analysis for discrete orthogonal polynomials is derived, and new formulae for the confidence and prediction intervals are presented for the first time. The orthogonal basis functions are numerically more stable and yield more accurate results than the traditional polynomial Vandermonde basis; the methods are directly compared. The new virtual environment for measurement and calibration of cyber-physical systems is well suited for establishing the error propagation chain through an entire measurement system, including complicated tasks such as data fusion. As an example, an adaptable virtual lens model for an optical measurement system is established via a reference measurement. If the same hardware setup is used in different systems, the uncertainty can be estimated a priori, before an individual system's calibration, making the approach suitable for industrial applications. With this model it is possible to determine the number of calibration nodes required for system-level calibration in order to achieve a predefined measurement uncertainty. Hence, with this approach, systematic errors can be greatly reduced and the remaining random error is described by a probabilistic model. Verification is performed via numerical experiments using a non-parametric Kolmogorov-Smirnov test and Monte Carlo simulation.","PeriodicalId":228132,"journal":{"name":"2014 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"66 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114663171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An efficient computational intelligence technique for affine-transformation-invariant image face detection, tracking, and recognition in a video stream","authors":"A. J. Myers, D. Megherbi","doi":"10.1109/CIVEMSA.2014.6841444","DOIUrl":"https://doi.org/10.1109/CIVEMSA.2014.6841444","url":null,"abstract":"While there are many current approaches to solving the difficulties that come with detecting, tracking, and recognizing a given face in a video sequence, the difficulties arising from differences in pose, facial expression, orientation, lighting, scaling, and location remain an open research problem. In this paper we present and analyze a computationally efficient approach for each of the three processes, namely template-based face detection, tracking, and recognition. The proposed algorithms are faster relative to other existing iterative methods. In particular, we show that, unlike such iterative methods, the proposed method does not estimate a given face's rotation angle or scaling factor by searching over all possible rotations or scaling factors. Instead, the proposed method segments the face and aligns the line between the two eyes' pupils with the image x-axis. Reference face images in a given database are normalized with respect to translation, rotation, and scaling. We show how the proposed estimation of a face image template's rotation and scaling factor leads to real-time template image rotation and scaling corrections. This allows the recognition algorithm to be less computationally complex than iterative methods.","PeriodicalId":228132,"journal":{"name":"2014 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130289074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Human movement quantification using Kinect for in-home physical exercise monitoring","authors":"S. Gauthier, A. Crétu","doi":"10.1109/CIVEMSA.2014.6841430","DOIUrl":"https://doi.org/10.1109/CIVEMSA.2014.6841430","url":null,"abstract":"The paper proposes a framework for in-home physical exercise monitoring based on a Kinect platform. The analysis goes beyond state-of-the-art solutions by monitoring more joints and offering more advanced reporting capabilities on the movement, such as: the position and trajectory of each joint, the working envelope of each body member, the average velocity, and a measure of the user's fatigue after an exercise sequence. This data can be visualised and compared to a standard (e.g. a healthy user, for rehabilitation purposes) or an ideal performance (e.g. a perfect sport pose for exercising) in order to give the user a measure of his/her own performance and motivate him/her to continue the training program. Such information can also be used by a therapist or professional sports trainer to evaluate the progress of a patient or of a trainee.","PeriodicalId":228132,"journal":{"name":"2014 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"52 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131590149","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"ImmerVol: An immersive volume visualization system","authors":"N. Khan, M. Kyan, L. Guan","doi":"10.1109/CIVEMSA.2014.6841433","DOIUrl":"https://doi.org/10.1109/CIVEMSA.2014.6841433","url":null,"abstract":"Volume visualization is a popular technique for analyzing 3D datasets, especially in the medical domain. An immersive visual environment provides easier navigation through the rendered dataset. However, visualization is only one part of the problem. Finding an appropriate Transfer Function (TF) for mapping color and opacity values in Direct Volume Rendering (DVR) is difficult. This paper combines the benefits of the CAVE Automatic Virtual Environment with a novel approach towards TF generation for DVR, where the traditional low-level color and opacity parameter manipulations are eliminated. The TF generation process is hidden behind a Spherical Self Organizing Map (SSOM). The user interacts with the visual form of the SSOM lattice on a mobile device while viewing the corresponding rendering of the volume dataset in real time in the CAVE. The SSOM lattice is obtained through high-dimensional features extracted from the volume dataset. The color and opacity values of the TF are automatically generated based on the user's perception. Hence, the resulting TF can expose complex structures in the dataset within seconds, which the user can analyze easily and efficiently through complete immersion.","PeriodicalId":228132,"journal":{"name":"2014 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125341914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Immersion and involvement in a 3D training environment: Experimenting different points of view","authors":"A. Rapp, Cristina Gena","doi":"10.1109/CIVEMSA.2014.6841432","DOIUrl":"https://doi.org/10.1109/CIVEMSA.2014.6841432","url":null,"abstract":"In this paper we describe an experimental evaluation comparing a first-person and a third-person view experience in a virtual training environment that uses a chemical-physical simulator to reproduce liquid and gas leakages in the plant. We compared user performance on self-orientation and object-finding tasks using two different points of view: first-person and third-person perspective. Our findings show that a first-person view enhances performance in the object-finding tasks through a greater sense of immersion, and thus of involvement. However, there is no significant difference in performance when users have to move through the 3D scene and orient themselves.","PeriodicalId":228132,"journal":{"name":"2014 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117001853","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An incremental framework for classification of EEG signals using quantum particle swarm optimization","authors":"Kaveh Hassani, Won-sook Lee","doi":"10.1109/CIVEMSA.2014.6841436","DOIUrl":"https://doi.org/10.1109/CIVEMSA.2014.6841436","url":null,"abstract":"Classification of electroencephalographic (EEG) signals is a sophisticated task that determines the accuracy of the thought-pattern recognition performed by a brain-computer interface (BCI), which, in turn, determines the degree of naturalness of the interaction provided by that system. However, classifying EEG signals is not a trivial task due to their non-stationary characteristics. In this paper, we introduce and utilize the incremental quantum particle swarm optimization (IQPSO) algorithm for incremental classification of an EEG data stream. IQPSO builds the classification model as a set of explicit rules, which benefits from semantic symbolic knowledge representation and enhanced comprehensibility. We compared the performance of IQPSO against ten other classifiers on two EEG datasets. The results suggest that IQPSO outperforms the other classifiers in terms of classification accuracy, precision, and recall.","PeriodicalId":228132,"journal":{"name":"2014 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"221 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134087110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"vConnect: Connect the real world to the virtual world","authors":"Yifeng He, Ziyang Zhang, Xiaoming Nan, Ning Zhang, Fei Guo, Edward Rosales, L. Guan","doi":"10.1109/CIVEMSA.2014.6841434","DOIUrl":"https://doi.org/10.1109/CIVEMSA.2014.6841434","url":null,"abstract":"The Cave Automatic Virtual Environment (CAVE) is a fully immersive Virtual Reality (VR) system. CAVE systems have been widely used in many applications, such as architectural and industrial design, medical training and surgical planning, museums and education. However, one limitation for most of the current CAVE systems is that they are separated from the real world. The user in the CAVE is not able to sense the real world around him or her. In this paper, we propose a vConnect architecture, which aims to establish real-time bidirectional information exchange between the virtual world and the real world. Furthermore, we propose finger interactions which enable the user in the CAVE to manipulate the information in a natural and intuitive way. We implemented a vHealth prototype, a CAVE-based real-time health monitoring system, through which we demonstrated that the user in the CAVE can visualize and manipulate the real-time physiological data of the patient who is being monitored, and interact with the patient.","PeriodicalId":228132,"journal":{"name":"2014 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"126 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134584014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A study of the effect of feature reduction via statistically significant pixel selection on fruit object representation, classification, and machine learning prediction","authors":"P. Beaulieu, D. Megherbi","doi":"10.1109/CIVEMSA.2014.6841443","DOIUrl":"https://doi.org/10.1109/CIVEMSA.2014.6841443","url":null,"abstract":"Object recognition, or classification, has been one of the foundational building blocks of machine intelligence. Over the years, several methodologies have been proposed in the literature. In the past couple of decades, three methods have been the predominant means of object recognition: Principal Component Analysis, Fisher Linear Discriminant Analysis, and correlation. While a human can easily differentiate between objects even when they are partially obscured, a machine has greater difficulty differentiating between objects even when they are unobscured. There is important information within a given image that determines the type of object the image contains. This paper presents the use of a 2-sample statistical t-test as a feature-reduction method to choose those feature pixels of a given image that may be more important and significant than others, and to order them by significance based on a proposed performance criterion metric. The aim is to study the effect of selecting significant feature pixels on the recognition accuracy of the three above-mentioned, widely used object recognition methods. We also introduce a performance criterion, which we denote saturation, to evaluate the robustness of the classification/prediction accuracy of these classification methods. We show that using the 2-sample t-test to choose feature pixels, and reorganizing these chosen features based on the proposed performance criterion metrics, in many instances enhances and stabilizes the recognition results. This paper also introduces for the first time the terms EigenFruit and FisherFruit for eigenvalue-based fruit classification and prediction analysis.","PeriodicalId":228132,"journal":{"name":"2014 IEEE International Conference on Computational Intelligence and Virtual Environments for Measurement Systems and Applications (CIVEMSA)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-05-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134341107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}