{"title":"Characterization for photovoltaic generation systems via Higher Order Wavelet Neural Networks","authors":"L. J. Ricalde, E. H. Rubio, E. Ordonez, Lifter O. Ricalde","doi":"10.1109/WAC.2014.6936021","DOIUrl":"https://doi.org/10.1109/WAC.2014.6936021","url":null,"abstract":"This paper focusses on applications of neural networks for forecasting in photovoltaic arrays. A Higher Order Wavelet Neural Network trained with an extended Kalman Filter training algorithm is implemented for data modeling in smart grids. The length of the regression vector is determined using the Cao methodology. The applicability of this architecture is illustrated via simulation using real data values from Photovoltaic modules.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"7 9","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120910446","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AUV location detection in an enclosed environment","authors":"Mohan Kumar, Satish Vaishnav, M. Jamshidi, M. Joordens","doi":"10.1109/WAC.2014.6935995","DOIUrl":"https://doi.org/10.1109/WAC.2014.6935995","url":null,"abstract":"Normally, experiments are done in a controlled environment so that different systems under test can be isolated. The added benefit is that the sensors used are a lot more accurate under controlled conditions. In the experiments perform on underwater robot localization, this was not the case. The sonar localization equipment use perform flawlessly in open water as it was designed to do, but poorly in an indoor pool. It is believed that the sonar had too much power causing too many reflections in the enclosed space. Unfortunately the experiments are better done in a pool so as to control the elements under test. This paper is the search to improve the equipment's accuracy in an enclosed environment by attempting to reduce the power of the sonar via mechanical means.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116136646","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On GPGPU parallel implementation of hands and arms motion estimation of a car driver with depth image sensor by particle filter","authors":"N. Ikoma","doi":"10.1109/WAC.2014.6935867","DOIUrl":"https://doi.org/10.1109/WAC.2014.6935867","url":null,"abstract":"GPGPU parallel computation technology has been combined with depth image sensor such as Microsoft Xbox360 KINECT for real-time estimation of car driver's hands and arms motion with an elaborated tracking method based on particle filter. Vision observation by KINECT including depth image provides more accurate hand/arm region information, so we can extend the motion estimation method, not only on hands/wrists region with skin color cue, but also on arms region not necessarily having skin color, based on a depth signal. In addition, with particle filter for state estimation in robust and in sequentially with GPGPU parallel implementation for real-time computation, it allows us to develop a real-time motion estimation system of a car driver. Contribution of this paper is twofold; 1) to provide whole summary of steering hands / arms motion estimation methods so far based on particle filters and partially with the aid of GPGPU technology, and 2) to propose a new system implementation of GPGPU parallel particle filter not only for hands/wrists region but also for arms region of a car driver with the aid of depth image sensor. Some experimental results have been shown with the proposed implementation.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"211 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134303531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Positive observer design for fractional order systems","authors":"B. Shafai, Amirreza Oghbaee","doi":"10.1109/WAC.2014.6936032","DOIUrl":"https://doi.org/10.1109/WAC.2014.6936032","url":null,"abstract":"This paper initially starts with an overview of fractional order representation of linear continuous-time systems and their stability analysis. The design of fractional order observers for fractional order system (FOS) is considered with the purpose of estimating the system states in feedback implementation. Using the stability conditions for FOS, design procedures for both fractional order observer of proportional and proportional integral types (Pα-Observer and PIα-Observer) are given with the aim of generalizing the conventional P- and PI-Observers. Finally, the problem of positive Pα-Observer design for positive FOS is formulated and solved using linear programming. The possibility of incorporating Pα-Observer and state feedback control law is also addressed for stabilization of FOS.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133073351","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"3D plane detection for robot perception applying particle swarm optimization","authors":"H. Masuta, Shinichiro Makino, Hun-ok Lim","doi":"10.1109/WAC.2014.6936041","DOIUrl":"https://doi.org/10.1109/WAC.2014.6936041","url":null,"abstract":"This article describes a 3D plane detection method for an intelligent robot to perceive an unknown object with 3D range sensor. Previously, various method has been proposed to perceive unknown environment. However, the previous unknown object detection method has problems which are high computational costs and low-accuracy for small object. In order to solve the mentioned problems, we have proposed an online processable unknown object detection based on a 3D plane detection method. The proposed method consists of simple plane detection applying particle swarm optimization (PSO) with region growing (RG) and integrated object plane detection. The simple plane detection is focused on small plane detection and reducing computational costs. To improve the accuracy, we apply PSO and RG. And, integrated object plane detection focuses on stability of detecting plane. As experimental results, we show that the computational cost is reduced to be able to calculate in real time for robot operation. And, the proposed method detects small planes of specific objects. Furthermore, we discuss the capability of proposed method which coordinate the ability of reducing computational costs and improving the plane detection accuracy.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114407606","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"4 Degree-of-Freedom haptic device for surgical simulation","authors":"Michael Mortimer, B. Horan, A. Stojcevski","doi":"10.1109/WAC.2014.6936126","DOIUrl":"https://doi.org/10.1109/WAC.2014.6936126","url":null,"abstract":"Haptic interaction is a growing area of research and enables users to interact with virtual objects using their haptic sensory modality. Haptically enabled surgical simulation can allow users to be trained with realistic haptic feedback. Commercial-Off-The-Shelf (COTS) haptic devices typically provide either 3 or 6-Degrees of Freedom (DOF). While low-cost 3-DOF COTS haptic devices do exist, in many surgical simulation scenarios haptic feedback in more than 3-DOF is required. This work presents a low-cost attachment for retrofitting a 4th DOF to the affordable 3-DOF Phantom Omni haptic device. The attachment allows the provision of torque feedback around the stylus' longitudinal axis lending itself to applications where 3-DOF Cartesian forces and 1-DOF torque feedback are required, such as surgical screw insertion. In order to integrate the 4th DOF attachment, the kinematics of the resulting device are considered, and the Jacobian determined. The workspace of the Phantom Omni is also considered as well as the effect of stylus orientation on the ability to display torque. Finally the generation of forces and torques for simulating pedicle screw insertion as required in scoliosis surgery is discussed.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114437277","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Recommender systems in e-commerce","authors":"Sanjeevan Sivapalan, A. Sadeghian, H. Rahnama, A. M. Madni","doi":"10.1109/WAC.2014.6935763","DOIUrl":"https://doi.org/10.1109/WAC.2014.6935763","url":null,"abstract":"Internet is speeding up and modifying the manner in which daily tasks such as online shopping, paying utility bills, watching new movies, communicating, etc., are accomplished. As an example, in older shopping methods, products were mass produced for a single market and audience but that approach is no longer viable. Markets based on long product and development cycles can no longer survive. To stay competitive, markets need to provide different products and services to different customers with different needs. The shift to online shopping has made it incumbent on producers and retailers to customize for customers' needs while providing more options than were possible before. This, however, poses a problem for customers who must now analyze every offering in order to determine what they actually need and will benefit from. To aid customers in this scenario, we discuss about common recommender systems techniques that have been employed and their associated trade-offs.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124729305","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A pose graph based visual SLAM algorithm for robot pose estimation","authors":"Soonhac Hong, C. Ye","doi":"10.1109/WAC.2014.6936197","DOIUrl":"https://doi.org/10.1109/WAC.2014.6936197","url":null,"abstract":"This paper presents a pose graph based visual SLAM (Simultaneous Localization and Mapping) method for 6-DOF robot pose estimation. The method uses a fast ICP (Iterative Closest Point) algorithm to enhance a visual odometry for estimating the pose change of a 3D camera in a feature-sparse environment. It then constructs a graph using the pose changes computed by the improved visual odometry and employ a pose optimization process to obtain the optimal estimates of the camera poses. The proposed method is compared with an Extended Kalman Filter (EKF) based pose estimation method in both feature-rich environments and feature-sparse environments. The experimental results show that the graph based SLAM method has a more consistent performance than the EKF based method in visual feature-rich environments and it outperforms the EKF counterpart in feature-sparse environments.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"69 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124752923","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Estimation of user's hand motion based on EMG and EEG signals","authors":"K. Kiguchi, K. Tamura, Y. Hayashi","doi":"10.1109/WAC.2014.6936115","DOIUrl":"https://doi.org/10.1109/WAC.2014.6936115","url":null,"abstract":"A surface EMG signal is one of the most widely used signals as input signals to wearable robots. However, EMG signals that are used to estimate motions are not always available to all users. On the other hand, an EEG signal has drawn attention as input signals for those robots in recent years. The EEG signals can be measured even with amputees and paralyzed patients who are not able to generate some EMG signals. However, the measured EEG signal does not have one-to-one relationships with the corresponding brain part. Therefore, it is more difficult to find the required signals for the control of the robot in accordance with the intention of the user's motion using the EEG signals compared with that using the EMG signals. In this paper, both the EMG and EEG signals are used to estimate the user's motion intention. In the proposed method, the EMG signals are used as main input signals because the EMG signals have higher relative to the motion of a user in comparison with the EEG signals. The EEG signals are used as sub signals in order to cover the estimation of the intention of the user's motion when all required EMG signals cannot be measured. The effectiveness of the proposed method has been evaluated by performing experiments.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122968763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Navigation for autonomous robots in partially observable facilities","authors":"Henry I. Ibekwe, A. Kamrani","doi":"10.1109/WAC.2014.6936134","DOIUrl":"https://doi.org/10.1109/WAC.2014.6936134","url":null,"abstract":"Designing mobile robots that navigate indoor environments autonomously is known to be a difficult problem. A critical issue is in the formulation of robust motion control algorithms capable of reliably sensing partial or incomplete information from the environment and using this information to choose appropriate actions to achieve its designed goals. As an example, suppose we wish to deploy a mobile robot that autonomously patrols defined locations at a hazardous high-security facility. The robot must maintain accurate knowledge of its location, while using sensory data to recognize objects and obstacles in its immediate vicinity. Its task is to inspect the desired locations within a defined time period and provide real-time data in the event of an incident. The problem is thus to choose appropriate actions that result in accomplishing the patrol in a minimal amount of time in the partially structured environment. To solve this problem we adopt the Partially Observable Markov Decision Processes (POMDP) formalism to find near-optimal and efficient policies that provides a description the robot's motion in environments with incomplete state information. POMDP is a generalization of Markov Decision Processes (MDPs). It models a system as a coupling of an agent/decision maker (robots in our case) and an environment. We also present a methodology called Goal-Specific Representation (GSR) to reduce the size of the state-space for computational efficiency and propose an extension to the methodology.","PeriodicalId":196519,"journal":{"name":"2014 World Automation Congress (WAC)","volume":"154 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2014-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121683396","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}