Probabilistic Semantic Mapping with a Virtual Sensor for Building/Nature Detection
M. Persson, T. Duckett, Christoffer Valgren, A. Lilienthal
2007 International Symposium on Computational Intelligence in Robotics and Automation, 2007-06-20. DOI: 10.1109/CIRA.2007.382870
In human-robot communication it is often important to relate robot sensor readings to concepts used by humans. We believe that access to semantic maps will make it possible for robots to better communicate information to a human operator, and vice versa. The main contribution of this paper is a method that fuses data from different sensor modalities (range sensors and vision sensors are considered) to create a probabilistic semantic map of an outdoor environment. The method combines a learned virtual sensor (understood as one or several physical sensors with a dedicated signal processing unit for the recognition of real-world concepts) for building detection with a standard occupancy map. The virtual sensor is applied on a mobile robot, combining classifications of sub-images from a panoramic view with spatial information (the location and orientation of the robot) to give the likely locations of buildings. This information is combined with an occupancy map to calculate a probabilistic semantic map. Our experiments with an outdoor mobile robot show that the method produces semantic maps with correct labeling and an evident distinction between 'building' objects and 'nature' objects.
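The abstract describes fusing per-cell "building" evidence from the virtual sensor with a standard occupancy map to obtain a probabilistic semantic map. The sketch below illustrates one common way such a fusion can be done — a Bayesian log-odds update over a grid, with semantic labels restricted to cells the occupancy map considers occupied. The function name, array layout, and the 0.5 occupancy threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def logit(p):
    """Convert a probability to log-odds."""
    return np.log(p / (1.0 - p))

def fuse_semantic_map(occupancy, detections, prior=0.5):
    """Fuse per-cell 'building' probabilities from a virtual sensor
    into a probabilistic semantic map (hypothetical sketch).

    occupancy  : (H, W) array of occupancy probabilities from a
                 standard occupancy grid map.
    detections : list of (H, W) arrays, each holding the virtual
                 sensor's per-cell probability that the cell belongs
                 to a building (0.5 where the sensor is uninformative).
    Returns an (H, W) array of P(building) per cell; cells judged
    empty by the occupancy map keep the prior.
    """
    log_odds = np.full(occupancy.shape, logit(prior))
    for d in detections:
        # Bayesian log-odds update: independent observations add,
        # relative to the prior. Clip to avoid log(0).
        d = np.clip(d, 1e-6, 1.0 - 1e-6)
        log_odds += logit(d) - logit(prior)
    p_building = 1.0 / (1.0 + np.exp(-log_odds))
    # Only occupied cells carry a semantic label; free space stays
    # at the prior (no building/nature decision is made there).
    return np.where(occupancy > 0.5, p_building, prior)
```

Repeated consistent detections drive a cell's building probability toward 1, while cells the occupancy map marks as free are left unlabeled — matching the idea of attaching semantic labels only to mapped objects.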