{"title":"Diagnosing Deep Self-localization Network for Domain-shift Localization","authors":"Tanaka Kanji","doi":"10.1109/IEEECONF49454.2021.9382702","DOIUrl":null,"url":null,"abstract":"Deep convolutional neural network (DCN) has become a common approach in visual robot self-localization. In a typical self-localization system, a DCN is trained as a visual place classifier from past visual experiences in the target environment. However, its classification performance can be deteriorated when it is tested in a different domain (e.g., times of day, weathers, seasons), due to domain shifts. Therefore, an efficient domain-adaptation (DA) approach to suppress perdomain DA cost would be desired. In this study, we address this issue with a novel “domain-shift localization (DSL)” technique that diagnosis the DCN classifier with the goal of localizing which region of the robot workspace is significantly affected by domain-shifts. In our approach, the DSL task is formulated as a fault-diagnosis (FD) problem, in which the deterioration of DCN-based self-localization for a given query image is viewed as an indicator of domain-shifts at the imaged region. In our contributions, we address the following non-trivial issues: (1) We address a subimage-level fine-grained DSL task given a typical coarse image-level DCN classifier, in which the target DCN system is queried with a region-of-interest (RoI) masked synthesized query image to diagnosis the RoI region; (2) We extend the DSL task to a relevance feedback (RF) framework, to perform a further query and return improved diagnosis results; and (3) We implement the proposed framework on 3D point cloud imagery-based self-localization and experimentally demonstrate the effectiveness of the proposed algorithm.","PeriodicalId":395378,"journal":{"name":"2021 IEEE/SICE International Symposium on System Integration (SII)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-01-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE/SICE International Symposium on System Integration (SII)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IEEECONF49454.2021.9382702","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Deep convolutional neural networks (DCNs) have become a common approach to visual robot self-localization. In a typical self-localization system, a DCN is trained as a visual place classifier from past visual experiences in the target environment. However, its classification performance can deteriorate when the classifier is tested in a different domain (e.g., a different time of day, weather condition, or season) due to domain shifts. Therefore, an efficient domain-adaptation (DA) approach that suppresses the per-domain DA cost is desirable. In this study, we address this issue with a novel "domain-shift localization (DSL)" technique that diagnoses the DCN classifier with the goal of localizing which regions of the robot workspace are significantly affected by domain shifts. In our approach, the DSL task is formulated as a fault-diagnosis (FD) problem, in which the deterioration of DCN-based self-localization for a given query image is viewed as an indicator of domain shifts in the imaged region. Our contributions address the following non-trivial issues: (1) we address a subimage-level, fine-grained DSL task given a typical coarse, image-level DCN classifier, in which the target DCN system is queried with a region-of-interest (RoI)-masked synthesized query image to diagnose the RoI region; (2) we extend the DSL task to a relevance feedback (RF) framework that performs further queries and returns improved diagnosis results; and (3) we implement the proposed framework on 3D point-cloud-imagery-based self-localization and experimentally demonstrate the effectiveness of the proposed algorithm.
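The abstract does not include code, but the RoI-masked querying idea in contribution (1) can be sketched as follows. This is a minimal illustrative sketch under stated assumptions, not the authors' implementation: the `TinyPlaceClassifier` stands in for the trained DCN place classifier, RoIs are taken as cells of a regular grid, masking is simple zero-fill, and `drop_thresh` is a hypothetical sensitivity parameter. The masking direction shown (flagging a cell when hiding it restores confidence in the expected place class) is one plausible reading of the paper's diagnosis rule.

```python
# Minimal sketch (assumed, not the authors' code) of RoI-masked querying
# for domain-shift localization: mask one region of the query image at a
# time, re-query the place classifier, and flag regions whose masking
# substantially changes confidence in the expected place class.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyPlaceClassifier(nn.Module):
    """Stand-in for the DCN visual place classifier (assumed architecture)."""

    def __init__(self, num_places: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, num_places)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))


def roi_masked_diagnosis(model, image, place_id, grid=4, drop_thresh=0.2):
    """Mask one grid cell (RoI) at a time and re-query the classifier.

    Returns the unmasked confidence for the expected place class and a
    list of flagged cells (i, j, confidence change).
    """
    model.eval()
    flagged = []
    with torch.no_grad():
        base = F.softmax(model(image.unsqueeze(0)), dim=1)[0, place_id].item()
        _, h, w = image.shape
        ch, cw = h // grid, w // grid
        for i in range(grid):
            for j in range(grid):
                masked = image.clone()
                masked[:, i * ch:(i + 1) * ch, j * cw:(j + 1) * cw] = 0.0
                conf = F.softmax(model(masked.unsqueeze(0)),
                                 dim=1)[0, place_id].item()
                # Assumed diagnosis rule: if hiding a cell noticeably
                # restores confidence in the correct place, that cell
                # likely contains domain-shifted appearance.
                if conf - base > drop_thresh:
                    flagged.append((i, j, conf - base))
    return base, flagged


if __name__ == "__main__":
    model = TinyPlaceClassifier()
    query = torch.rand(3, 64, 64)  # placeholder query image
    print(roi_masked_diagnosis(model, query, place_id=3))
```

In a full system, each flagged cell would map back to a region of the robot workspace, and the RF extension of contribution (2) would issue follow-up masked queries around flagged cells to refine the diagnosis.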