Open-world driving scene segmentation via multi-stage and multi-modality fusion of vision-language embedding
Yingjie Niu, Ming Ding, Yuxiao Zhang, Maoning Ge, Hanting Yang, K. Takeda
2023 IEEE Intelligent Vehicles Symposium (IV), June 4, 2023. DOI: 10.1109/IV55152.2023.10186652
In this study, a pixel-text-level, multi-stage, multi-modality fusion segmentation method is proposed to make open-world driving scene segmentation more efficient. It addresses the varied semantic perception needs of autonomous driving in real-world situations. The method can finely segment unseen categories without any additional corresponding segmentation annotations, using only existing semantic segmentation data. The proposed method consists of four modules. A visual representation embedding module and a segmentation command embedding module extract features from the driving scene and from the segmentation category command, respectively. A multi-stage multi-modality fusion module then fuses the visual information of the driving scene with the textual segmentation command at multiple feature scales, at the pixel-text level. Finally, a cascade segmentation head grounds the segmentation command text in the driving scene, encouraging the model to generate high-quality semantic segmentation results. In the experiments, we first verify the effectiveness of the method for zero-shot segmentation on a popular driving scene segmentation dataset. We also confirm its effectiveness on synonym unseen labels and hierarchy unseen labels for open-world semantic segmentation.
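The paper itself does not include code; the following is a minimal PyTorch-style sketch of how the four modules described in the abstract (visual representation embedding, segmentation command embedding, multi-stage pixel-text fusion, and a cascade segmentation head) could be wired together. All module names, tensor shapes, layer choices, and the simple dot-product fusion are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the four-module pipeline described in the abstract.
# All shapes, layers, and names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class VisualEmbedding(nn.Module):
    """Extracts multi-scale feature maps from the driving-scene image."""
    def __init__(self, dims=(64, 128, 256)):
        super().__init__()
        self.stages = nn.ModuleList()
        in_ch = 3
        for out_ch in dims:
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ))
            in_ch = out_ch

    def forward(self, image):
        feats, x = [], image
        for stage in self.stages:
            x = stage(x)
            feats.append(x)                        # one feature map per scale
        return feats


class CommandEmbedding(nn.Module):
    """Embeds the segmentation-command tokens into a single text vector."""
    def __init__(self, vocab_size=10000, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)

    def forward(self, token_ids):
        # token_ids: (batch, num_tokens) -> mean-pooled command embedding
        return self.embed(token_ids).mean(dim=1)   # (batch, dim)


class PixelTextFusion(nn.Module):
    """Fuses one visual feature map with the command embedding per pixel."""
    def __init__(self, vis_dim, txt_dim):
        super().__init__()
        self.vis_proj = nn.Conv2d(vis_dim, txt_dim, 1)

    def forward(self, feat, text):
        v = self.vis_proj(feat)                    # (B, D, H, W)
        t = text[:, :, None, None]                 # (B, D, 1, 1)
        sim = (v * t).sum(dim=1, keepdim=True)     # per-pixel similarity map
        return v * torch.sigmoid(sim)              # text-gated visual features


class CascadeSegHead(nn.Module):
    """Coarse-to-fine head that grounds the command into a segmentation mask."""
    def __init__(self, dim, num_stages=3):
        super().__init__()
        self.refine = nn.ModuleList(
            [nn.Conv2d(dim, dim, 3, padding=1) for _ in range(num_stages)])
        self.out = nn.Conv2d(dim, 1, 1)

    def forward(self, fused_feats, out_size):
        x = None
        # Start from the coarsest fused stage and progressively refine.
        for feat, conv in zip(reversed(fused_feats), self.refine):
            if x is not None:
                feat = feat + F.interpolate(
                    x, size=feat.shape[-2:], mode="bilinear", align_corners=False)
            x = F.relu(conv(feat))
        logits = self.out(x)
        return F.interpolate(logits, size=out_size, mode="bilinear", align_corners=False)


class OpenWorldSegmenter(nn.Module):
    def __init__(self, dims=(64, 128, 256), txt_dim=256):
        super().__init__()
        self.visual = VisualEmbedding(dims)
        self.command = CommandEmbedding(dim=txt_dim)
        self.fusions = nn.ModuleList([PixelTextFusion(d, txt_dim) for d in dims])
        self.head = CascadeSegHead(txt_dim, num_stages=len(dims))

    def forward(self, image, token_ids):
        feats = self.visual(image)
        text = self.command(token_ids)
        fused = [fuse(f, text) for f, fuse in zip(feats, self.fusions)]
        return self.head(fused, out_size=image.shape[-2:])


if __name__ == "__main__":
    model = OpenWorldSegmenter()
    img = torch.randn(1, 3, 128, 256)              # dummy driving-scene image
    cmd = torch.randint(0, 10000, (1, 4))          # dummy "segment <class>" tokens
    print(model(img, cmd).shape)                   # (1, 1, 128, 256) mask logits
```

Because the text embedding is compared against pixel features rather than a fixed classifier layer, unseen category names (including synonyms or hierarchy terms) can be queried at inference simply by changing the command tokens, which is the property the zero-shot experiments in the abstract evaluate.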