{"title":"A new way of deep learning combined with street view images for air pollutant concentration prediction","authors":"Jialiang Zhang, Xiaohai Qin, Ying Liu, Yubo Fan","doi":"10.1117/12.2605032","DOIUrl":null,"url":null,"abstract":"Given the complex spatial structure of urban streets, we use two deep semantic segmentation methods with highprecision to model with street view image data. Through segmentation and quantization, we obtain depth semantic segmentation prediction maps and realize pixel-level classification of multi-objects in the image in a global sense. To accurately and effectively evaluate the urban environmental air quality which is closely related to residents' health, the category target objects related to the predicted pollutant concentration in the image are established as eight categories. The segmentation results are combined with the gas quality data collected by the mobile machine to predict, which can give a set of air pollutant concentration prediction scheme for city management personnel for reference. In this study, a semantic segmentation network is adopted to extract the main environmental factors from street view images as feature vectors of gas prediction models. All the image data used in the experiment were collected in Augsburg, Germany. The sampling tool was a pinhole camera installed on a mobile trolley and set to capture an image every ten seconds. The experiment produced various environmental factors, then input them into the prediction model by combining with the air measurement data of the street view for pollutant prediction. This method can be used as a reference path for evaluating urban environmental quality, air indicators, and air pollutant concentrations.","PeriodicalId":90079,"journal":{"name":"... International Workshop on Pattern Recognition in NeuroImaging. International Workshop on Pattern Recognition in NeuroImaging","volume":"27 21 1","pages":"119130L - 119130L-6"},"PeriodicalIF":0.0000,"publicationDate":"2021-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"... International Workshop on Pattern Recognition in NeuroImaging. International Workshop on Pattern Recognition in NeuroImaging","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1117/12.2605032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Given the complex spatial structure of urban streets, we apply two high-precision deep semantic segmentation methods to model street view image data. Through segmentation and quantization we obtain dense semantic segmentation prediction maps, achieving pixel-level classification of multiple object types across the whole image. To evaluate urban ambient air quality, which is closely tied to residents' health, accurately and effectively, the target objects in the images that are relevant to pollutant concentration prediction are grouped into eight categories. The segmentation results are combined with air quality data collected by a mobile measurement platform to make predictions, yielding an air pollutant concentration prediction scheme that city managers can use for reference. In this study, a semantic segmentation network extracts the main environmental factors from street view images as feature vectors for the gas prediction models. All image data used in the experiments were collected in Augsburg, Germany; the sampling device was a pinhole camera mounted on a mobile trolley and set to capture an image every ten seconds. The extracted environmental factors were then fed into the prediction model together with the corresponding street-level air measurements to predict pollutant concentrations. The method can serve as a reference approach for evaluating urban environmental quality, air quality indicators, and air pollutant concentrations.
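To make the pipeline described in the abstract concrete, the following is a minimal sketch (not the authors' code) of one plausible realization: a segmentation map is quantized into per-class pixel fractions for eight assumed street-scene categories, these fractions are concatenated with co-located mobile sensor readings, and a regressor predicts the pollutant concentration. The class list, the auxiliary sensor variables, the random forest regressor, and all data in the example are assumptions made for illustration only.

```python
# Illustrative sketch, not the published method: quantize a semantic
# segmentation map into class-pixel fractions and regress a pollutant
# concentration from those fractions plus mobile sensor readings.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Eight assumed street-scene categories (hypothetical; the paper's exact
# category definitions are not reproduced here).
CLASSES = ["road", "sidewalk", "building", "vegetation",
           "sky", "vehicle", "person", "other"]

def class_fractions(seg_map: np.ndarray) -> np.ndarray:
    """seg_map: HxW array of integer class IDs (0..7) from a segmentation
    network. Returns the fraction of pixels in each of the eight categories."""
    counts = np.bincount(seg_map.ravel(), minlength=len(CLASSES))[:len(CLASSES)]
    return counts / seg_map.size

def build_features(seg_maps, sensor_rows):
    """Concatenate image-derived class fractions with sensor measurements
    (e.g., temperature, humidity) recorded on the mobile trolley."""
    return np.array([np.concatenate([class_fractions(m), s])
                     for m, s in zip(seg_maps, sensor_rows)])

# Placeholder data standing in for aligned segmentation maps, sensor
# readings, and measured concentrations (e.g., PM2.5 in ug/m^3).
rng = np.random.default_rng(0)
seg_maps = [rng.integers(0, 8, size=(64, 64)) for _ in range(100)]
sensors = rng.normal(size=(100, 2))        # e.g., temperature, humidity
targets = rng.uniform(5, 40, size=100)     # placeholder concentrations

X = build_features(seg_maps, sensors)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, targets)
print(model.predict(X[:3]))                # predicted concentrations
```

In practice the segmentation maps would come from the trained segmentation networks and the targets from the trolley's gas measurements aligned by timestamp; the choice of regressor here is arbitrary and only stands in for whichever prediction model the study actually used.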