Graph Convolutional Integration based Distributed Multi-View Learning in Urban Air Mobility
Kai Xiong, Rui Wang, S. Leng, Quanxin Zhao, Meie Peng
{"title":"基于图卷积集成的城市空中交通分布式多视图学习","authors":"Kai Xiong, Rui Wang, S. Leng, Quanxin Zhao, Meie Peng","doi":"10.1109/ICCCWorkshops57813.2023.10233715","DOIUrl":null,"url":null,"abstract":"Urban Air Mobility (UAM) expands vehicles from the ground to the near-ground space, envisioned as a revolution for transportation systems. Comprehensive scene perception is the foundation for Autonomous Air Vehicles (AAV). However, AAV encounters a primary perception challenge: three-dimensional piloting makes the visual perception of AAVs easily obstructed by skyscrapers in urban. High perception learning requirements conflict with the view-limited visual information. To overcome the challenge, multi-view learning has been proposed to collect multi-view data to train the onboard deep learning model. But traditional multi-view learning is deployed on a single device operating centrally, which is difficult to deploy in dynamic environments. Accordingly, this paper proposes Graph Convolutional Network (GCN) based Distributed Multi-View learning (GCNDMV), taking account of GCN relation extractability to facilitate single-view representation learning integration. The proposed distributed multi-view learning framework allows distinct single-view representation learning integration. Moreover, due to the diversity gain of different single-view learning, various single-view representation learning of the GCN-DMV outperforms a homogeneous single-view representation learning of GCN-DMV in terms of recognition accuracy. Simulation experiments are conducted over a realistic multi-view dataset to verify the efficiency of the distributed multi-view learning framework.","PeriodicalId":201450,"journal":{"name":"2023 IEEE/CIC International Conference on Communications in China (ICCC Workshops)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-08-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Graph Convolutional Integration based Distributed Multi-View Learning in Urban Air Mobility\",\"authors\":\"Kai Xiong, Rui Wang, S. Leng, Quanxin Zhao, Meie Peng\",\"doi\":\"10.1109/ICCCWorkshops57813.2023.10233715\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Urban Air Mobility (UAM) expands vehicles from the ground to the near-ground space, envisioned as a revolution for transportation systems. Comprehensive scene perception is the foundation for Autonomous Air Vehicles (AAV). However, AAV encounters a primary perception challenge: three-dimensional piloting makes the visual perception of AAVs easily obstructed by skyscrapers in urban. High perception learning requirements conflict with the view-limited visual information. To overcome the challenge, multi-view learning has been proposed to collect multi-view data to train the onboard deep learning model. But traditional multi-view learning is deployed on a single device operating centrally, which is difficult to deploy in dynamic environments. Accordingly, this paper proposes Graph Convolutional Network (GCN) based Distributed Multi-View learning (GCNDMV), taking account of GCN relation extractability to facilitate single-view representation learning integration. The proposed distributed multi-view learning framework allows distinct single-view representation learning integration. 
Moreover, due to the diversity gain of different single-view learning, various single-view representation learning of the GCN-DMV outperforms a homogeneous single-view representation learning of GCN-DMV in terms of recognition accuracy. Simulation experiments are conducted over a realistic multi-view dataset to verify the efficiency of the distributed multi-view learning framework.\",\"PeriodicalId\":201450,\"journal\":{\"name\":\"2023 IEEE/CIC International Conference on Communications in China (ICCC Workshops)\",\"volume\":\"25 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2023-08-10\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2023 IEEE/CIC International Conference on Communications in China (ICCC Workshops)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/ICCCWorkshops57813.2023.10233715\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2023 IEEE/CIC International Conference on Communications in China (ICCC Workshops)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICCCWorkshops57813.2023.10233715","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0
Abstract
Urban Air Mobility (UAM) extends vehicles from the ground into near-ground airspace and is envisioned as a revolution for transportation systems. Comprehensive scene perception is the foundation of Autonomous Air Vehicles (AAVs). However, AAVs face a primary perception challenge: three-dimensional piloting makes their visual perception easily obstructed by skyscrapers in urban areas. The high requirements of perception learning conflict with this view-limited visual information. To overcome the challenge, multi-view learning has been proposed to collect multi-view data for training the onboard deep learning model. However, traditional multi-view learning runs centrally on a single device, which is difficult to deploy in dynamic environments. Accordingly, this paper proposes Graph Convolutional Network (GCN) based Distributed Multi-View learning (GCN-DMV), which exploits the relation-extraction capability of GCNs to integrate single-view representation learning. The proposed distributed multi-view learning framework allows distinct single-view representation learners to be integrated. Moreover, owing to the diversity gain across different single-view learners, GCN-DMV with heterogeneous single-view representation learning outperforms GCN-DMV with homogeneous single-view representation learning in terms of recognition accuracy. Simulation experiments are conducted on a realistic multi-view dataset to verify the efficiency of the distributed multi-view learning framework.
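The integration idea described in the abstract lends itself to a compact illustration. The following is a minimal sketch, not the authors' implementation: it shows how per-view embeddings produced by separate single-view encoders could be fused with one graph-convolution step over a view graph and then classified. The class name ViewFusionGCN, the embedding size, the fully connected view graph, and the mean pooling are all illustrative assumptions rather than details taken from the paper.

# Minimal sketch (illustrative, not the paper's code): fusing per-view embeddings
# with one graph convolution, assuming each AAV contributes one view embedding
# and the view nodes are fully connected to one another.
import torch
import torch.nn as nn


class ViewFusionGCN(nn.Module):
    """One graph-convolution layer over view nodes, followed by a classifier."""

    def __init__(self, embed_dim: int = 128, num_classes: int = 10):
        super().__init__()
        self.gcn_weight = nn.Linear(embed_dim, embed_dim)   # shared GCN weight W
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, view_embeddings: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # view_embeddings: (num_views, embed_dim), one row per single-view encoder
        # adj: (num_views, num_views) view-graph adjacency including self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        norm_adj = adj / deg                                 # row-normalised adjacency
        fused = torch.relu(self.gcn_weight(norm_adj @ view_embeddings))
        graph_repr = fused.mean(dim=0)                       # pool the view nodes
        return self.classifier(graph_repr)


# Usage: three views (e.g., three AAVs), fully connected view graph with self-loops.
views = torch.randn(3, 128)            # placeholder single-view embeddings
adj = torch.ones(3, 3)                 # fully connected adjacency, self-loops included
logits = ViewFusionGCN()(views, adj)
print(logits.shape)                    # torch.Size([10])

Row-normalising the adjacency keeps each view node's aggregated message at the scale of a single embedding, so heterogeneous single-view encoders can be mixed without any one view dominating simply by its degree in the view graph.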