2006 9th International Conference on Control, Automation, Robotics and Vision: Latest Publications

Emotional Communication with the Robot Head MEXI
2006 9th International Conference on Control, Automation, Robotics and Vision Pub Date : 2006-12-01 DOI: 10.1109/ICARCV.2006.345162
Natascha Esau, L. Kleinjohann, B. Kleinjohann
{"title":"Emotional Communication with the Robot Head MEXI","authors":"Natascha Esau, L. Kleinjohann, B. Kleinjohann","doi":"10.1109/ICARCV.2006.345162","DOIUrl":"https://doi.org/10.1109/ICARCV.2006.345162","url":null,"abstract":"This paper presents the robot head MEXI which is able to communicate to humans in an emotional way. MEXI recognizes emotions of its human counterpart from the prososdy of his or her natural speech using a fuzzy rule based approach. MEXI reacts on its perceptions by showing artificial emotions in its facial expressions and in the prosody of its synthesized natural speech. MEXI does not rely on a world model to control and plan its actions like usual goal based agents. Instead MEXI uses its internal state consisting of emotions and drives to evaluate its perceptions and action alternatives and controls its behavior on the basis of this evaluation. For MEXI, the behavior based programming paradigm originally developed by Arkin for robot navigation was extended to support a multidimensional control architecture based on emotions and drives","PeriodicalId":415827,"journal":{"name":"2006 9th International Conference on Control, Automation, Robotics and Vision","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127225193","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 11
Flexible Fuzzy Co-clustering with Feature-cluster Weighting
2006 9th International Conference on Control, Automation, Robotics and Vision Pub Date : 2006-12-01 DOI: 10.1109/ICARCV.2006.345069
William-Chandra Tjhi, Lihui Chen
{"title":"Flexible Fuzzy Co-clustering with Feature-cluster Weighting","authors":"William-Chandra Tjhi, Lihui Chen","doi":"10.1109/ICARCV.2006.345069","DOIUrl":"https://doi.org/10.1109/ICARCV.2006.345069","url":null,"abstract":"Fuzzy co-clustering is an unsupervised technique that performs simultaneous fuzzy clustering of objects and features. In this paper, we propose a new flexible fuzzy co-clustering algorithm which incorporates feature-cluster weighting in the formulation. We call it Flexible Fuzzy Co-clustering with Feature-cluster Weighting (FFCFW). By flexible we mean the algorithm allows the number of object clusters to be different from the number of feature clusters. There are two motivations behind this work. First, in the fuzzy framework, many co-clustering algorithms still require the number of object clusters to be the same as the number of feature clusters. This is despite the fact that such rigid structure is hardly found in real-world applications. The second motivation is that while there have been numerous attempts for flexible co-clustering, it is common that in such scheme the relationships between object and feature clusters are not clearly represented. For this reason we incorporate a feature-cluster weighting scheme for each object cluster generated by FFCFW so that the relationships between the two types of clusters are manifested in the feature-cluster weights. This enables the new algorithm to generate more accurate representation of fuzzy co-clusters. FFCFW is formulated by fusing together the core components of two existing algorithms. Like its predecessors, FFCFW adopts an iterative optimization procedure. We discuss in details the derivation of the proposed algorithm and the advantages it has over other existing works. Experiments on several large benchmark document datasets reveal the feasibility of our proposed algorithm","PeriodicalId":415827,"journal":{"name":"2006 9th International Conference on Control, Automation, Robotics and Vision","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127330566","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Communication Robots in the Network Robot Framework
2006 9th International Conference on Control, Automation, Robotics and Vision Pub Date : 2006-12-01 DOI: 10.1109/ICARCV.2006.345253
N. Hagita
{"title":"Communication Robots in the Network Robot Framework","authors":"N. Hagita","doi":"10.1109/ICARCV.2006.345253","DOIUrl":"https://doi.org/10.1109/ICARCV.2006.345253","url":null,"abstract":"This paper discusses communication robots as next-generation media of communication. The \"Network Robots\", a new framework for integrating ubiquitous network and robot technologies, is a step towards providing infrastructure to make robots into communication media. Based on this framework, communication robots have come into greater use by being networked with humans, cell-phones and ubiquitous sensors (cameras, tags, wearable computers, etc.). This paper introduces the communication robots developed at ATR. Three kinds of field experiments at elementary school, at a science museum, and a Japanese comedy conversation by two robots, called \"Robot Manzai\" are also introduced. The results of field experiments indicate the potential of network robots as communication media","PeriodicalId":415827,"journal":{"name":"2006 9th International Conference on Control, Automation, Robotics and Vision","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127336827","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Coordination Optimization-based Variable Structure Control for Main Steam Pressure of Power Plant
2006 9th International Conference on Control, Automation, Robotics and Vision Pub Date : 2006-12-01 DOI: 10.1109/ICARCV.2006.345208
Yang Yong, Luo An
{"title":"Coordination Optimization-based Variable Structure Control for Main Steam Pressure of Power Plant","authors":"Yang Yong, Luo An","doi":"10.1109/ICARCV.2006.345208","DOIUrl":"https://doi.org/10.1109/ICARCV.2006.345208","url":null,"abstract":"An optimal variable structure control (VSC) based on a coordination genetic algorithm has been developed. Steady-state error and control switching frequency are used to constitute the system performance indexes in the coordination optimization, while the tuning rate of boundary layer width (BLW) is employed as the optimization parameter. Then based on the mathematical relationship between BLW tuning law and steady state error, an optimized BLW tuning rate is added to the nonlinear control term of VSC. Simulation experiment results applied to the main steam pressure control (MSPC) of power plant show the comprehensive superiority of dynamical and static state performance by using the proposed controller over that of by using an optimized PID control. The proposed VSC system has better robustness against large parameter variations and disturbances. This succeeds in coordinately considering both chattering reduction and high-precision control in VSC","PeriodicalId":415827,"journal":{"name":"2006 9th International Conference on Control, Automation, Robotics and Vision","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124707069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
A Novel Conceptual Fish-like Robot Inspired by Rhinecanthus aculeatus
2006 9th International Conference on Control, Automation, Robotics and Vision Pub Date : 2006-12-01 DOI: 10.1109/ICARCV.2006.345100
Tianjiang Hu, Guangming Wang, Lincheng Shen, Fei Li
{"title":"A Novel Conceptual Fish-like Robot Inspired by Rhinecanthus Aculeatus","authors":"Tianjiang Hu, Guangming Wang, Lincheng Shen, Fei Li","doi":"10.1109/ICARCV.2006.345100","DOIUrl":"https://doi.org/10.1109/ICARCV.2006.345100","url":null,"abstract":"This paper proposes a novel conceptual underwater bio-robot inspired by Rhinecanthus aculeatus, which belongs to median and/or paired fin (MPF) propulsion fish and impresses researchers with agility by cooperative undulation of the dorsal-and-anal fins. Such a fish-like robot is anticipated to outperform the conventional aquatic robots in maneuverability and stability for oceanic exploitation necessities, e.g. benthonic mineral exploration. To begin with, a specimen of R. aculeatus was filmed in a glass aquarium (150cm times 50cm times 60cm) in which artificial seawater was maintained at 26 degC or so. Afterwards, we analyzed a few characteristics in morphology and locomotion with image processing and other approaches. The morphological and kinematical bionic inspirations were summarized, and in succession, we elaborately delineated the design scheme of our conceptual robotic fish including the schematic architectures, the structural and outside form, and the undulatory multi-fin propulsor","PeriodicalId":415827,"journal":{"name":"2006 9th International Conference on Control, Automation, Robotics and Vision","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125003777","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
Analysis of Vehicle Emissions and Prediction of Gross Emitter using Remote Sensing Data
2006 9th International Conference on Control, Automation, Robotics and Vision Pub Date : 2006-12-01 DOI: 10.1109/ICARCV.2006.345143
Jun Zeng, Huafang Guo, Yueming Hu, Tao Ye
{"title":"Analysis of Vehicle Emissions and Prediction of Gross Emitter using Remote Sensing Data","authors":"Jun Zeng, Huafang Guo, Yueming Hu, Tao Ye","doi":"10.1109/ICARCV.2006.345143","DOIUrl":"https://doi.org/10.1109/ICARCV.2006.345143","url":null,"abstract":"Interest has focused on the analysis of vehicle emission based on the remote sensing data during the last two decades. This paper proposes an artificial neural network model for predicting taxi gross emitters using remote sensing data. Firstly, it introduces the field test in Guangzhou, and then analyzes the various factors from the emission data. Secondly, after doing principal components analysis and selecting algorithm and architecture, the back-propagation neural network model with 8-17-1 architecture was established as the optimal approach. It gives a percentage of hits of 93%. Finally, comparison among our former research results and aggression analysis results were presented. The results show the potentiality and validity of the proposed method in the prediction of taxi gross emitters","PeriodicalId":415827,"journal":{"name":"2006 9th International Conference on Control, Automation, Robotics and Vision","volume":"28 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125060030","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
An Interconnection between Combined Classical Block Diagrams and Linear Fractional Transformation Block Diagrams
2006 9th International Conference on Control, Automation, Robotics and Vision Pub Date : 2006-12-01 DOI: 10.1109/ICARCV.2006.345215
P. Houlis, V. Sreeram
{"title":"An Interconnection between Combined Classical Block Diagrams and Linear Fractional Transformation Block Diagrams","authors":"P. Houlis, V. Sreeram","doi":"10.1109/ICARCV.2006.345215","DOIUrl":"https://doi.org/10.1109/ICARCV.2006.345215","url":null,"abstract":"In this paper, we will establish the relationship between a specific family of classical control systems and the linear fractional transformations. Those classical control systems may always be represented by linear fractional transformations, and vice versa, subject to certain conditions. A mathematical proof for this relationship is provided","PeriodicalId":415827,"journal":{"name":"2006 9th International Conference on Control, Automation, Robotics and Vision","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125106925","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Robot Space Exploration Using Peano Paths Generated by Self-Organizing Maps
2006 9th International Conference on Control, Automation, Robotics and Vision Pub Date : 2006-12-01 DOI: 10.1109/ICARCV.2006.345432
W. K. Lee, Wlodzislaw Duch, G. S. Ng
{"title":"Robot Space Exploration Using Peano Paths Generated by Self-Organizing Maps","authors":"W. K. Lee, Wlodzislaw Duch, G. S. Ng","doi":"10.1109/ICARCV.2006.345432","DOIUrl":"https://doi.org/10.1109/ICARCV.2006.345432","url":null,"abstract":"Autonomous exploration by a team of robots has many important applications in rescue operations, clearing of mine fields and other military applications, and even space exploration. With limited range of sensors robots have to divide exploration tasks among themselves working under multiple constraints. An optimal covering of two-dimensional area by robot trajectories requires formation of space-filling Peano curves. This may be achieved using self-organizing feature map (SOFM) algorithm. There are two steps involved in the proposed approach: first optimal trajectories are defined generating Peano curves for space of arbitrary shape using the SOFM algorithm, and second, robots are deployed for exploration based on selection of start/end nodes and radius of robot sensors. The same approach may be used to direct people or teams exploring some area in rescue operations. Tests simulations show that this approach achieves better coverage and faster exploration than competing algorithms","PeriodicalId":415827,"journal":{"name":"2006 9th International Conference on Control, Automation, Robotics and Vision","volume":"92 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126186144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
2D/3D Vision-Based Mango's Feature Extraction and Sorting
2006 9th International Conference on Control, Automation, Robotics and Vision Pub Date : 2006-12-01 DOI: 10.1109/ICARCV.2006.345248
T. Chalidabhongse, Panitnat Yimyam, P. Sirisomboon
{"title":"2D/3D Vision-Based Mango's Feature Extraction and Sorting","authors":"T. Chalidabhongse, Panitnat Yimyam, P. Sirisomboon","doi":"10.1109/ICARCV.2006.345248","DOIUrl":"https://doi.org/10.1109/ICARCV.2006.345248","url":null,"abstract":"This paper describes a vision system that can extract 2D and 3D visual properties of mango such as size (length, width, and thickness), projected area, volume, and surface area from images and use them in sorting. The 2D/3D visual properties are extracted from multiple view images of mango. The images are first segmented to extract the silhouette regions of mango. The 2D visual properties are then measured from the top view silhouette as explained by Yimyam et al. (2005). The 3D mango volume reconstruction is done using volumetric caving on multiple silhouette images. First the cameras are calibrated to obtain the intrinsic and extrinsic camera parameters. Then the 3D volume voxels are crafted based on silhouette images of the fruit in multiple views. After craving all silhouettes, we obtain the coarse 3D shape of the fruit and then we can compute the volume and surface area. We then use these features in automatic mango sorting which we employ a typical backpropagation neural networks. In this research, we employed the system to evaluate visual properties of a mango cultivar called \"Nam Dokmai\". There were two sets total of 182 mangoes in three various sizes sorted by weights according to a standard sorting metric for mango export. Two experiments were performed. One is for showing the accuracy of our vision-based feature extraction and measurement by comparing results with the measurements using various instruments. The second experiment is to show the sorting accuracy by comparing to human sorting. The results show the technique could be a good alternative and more feasible method for sorting mango comparing to human's manual sorting.","PeriodicalId":415827,"journal":{"name":"2006 9th International Conference on Control, Automation, Robotics and Vision","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127037707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 53
Dynamic Vision in the Dynamic Scene: An Algebraic Approach
2006 9th International Conference on Control, Automation, Robotics and Vision Pub Date : 2006-12-01 DOI: 10.1109/ICARCV.2006.345439
Jianchao Yao
{"title":"Dynamic Vision in the Dynamic Scene: An Algebraic Approach","authors":"Jianchao Yao","doi":"10.1109/ICARCV.2006.345439","DOIUrl":"https://doi.org/10.1109/ICARCV.2006.345439","url":null,"abstract":"In this paper, we address the issue of dynamically recovering 3D position vectors of a moving target based on its images in three views under the following scenario: (1) dynamic vision in dynamic scene, meaning that both camera and target are in motion; (2) The distance between target and camera is extremely large, so that we have only one image observation of the moving target for each view, the recovery of relative motion of target with respect to camera is impossible. By imposing the kinematical and dynamic constraints on the motion of target, its future position and velocity, given previous ones can be described by what we call f and g function. By combining with motion of the platform, the slant range for the middle time (or middle view) can be estimated by solving an eight-order polynomial. Since the three position vectors corresponding to the three image points on the three views can be assumed to lie in a single plane (approximate true if interval of the consecutive frames is small, and absolutely true if we consider the target to be a satellite moving on its orbit), the other two position vectors can be further derived. Initial experimental results demonstrated the correctness of the developed algorithm","PeriodicalId":415827,"journal":{"name":"2006 9th International Conference on Control, Automation, Robotics and Vision","volume":"161 2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2006-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115037501","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0