Surround view algorithm for parking assist system
Daniel Buljeta, M. Vranješ, Z. Marceta, J. Kovacevic
2019 Zooming Innovation in Consumer Technologies Conference (ZINC), May 2019
DOI: 10.1109/ZINC.2019.8769410
Modern vehicles use various advanced driver-assistance systems (ADAS) to make driving safer and more comfortable. One of them is a system that uses a set of in-vehicle cameras to provide the driver with a top-view image of the space around the vehicle, thereby helping the driver to park. The main part of that system is the algorithm which processes frames acquired simultaneously from four in-vehicle cameras mounted on different sides of the vehicle and creates the final top-view image of the space around the vehicle. In this paper, a new algorithm for that purpose is designed. It consists of two main parts: one performs camera calibration, and the other generates the top-view image. The implemented camera calibration is based on calibration patterns and calculates the camera parameters, which are then used to eliminate the image distortion caused by the fish-eye cameras. To generate the top-view image of the space around the vehicle, different geometric operations, including distortion correction, perspective transformation, and image stitching, are applied to the input frames acquired by the different cameras. The algorithm's performance is tested using four real automotive fish-eye cameras fixed to a vehicle model and connected to an ADAS development board.
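The calibration stage summarized above (a calibration pattern is used to estimate the fish-eye camera parameters, which are then applied to remove lens distortion) can be sketched with OpenCV's fisheye module. This is only a minimal illustrative sketch, not the authors' implementation: the use of OpenCV, a chessboard pattern, and the names `CHECKERBOARD`, `calibrate_fisheye`, and `undistort` are assumptions made for the example.

```python
# Illustrative sketch (not the paper's code): chessboard-based fish-eye
# calibration and distortion correction using OpenCV's fisheye module.
import cv2
import numpy as np

CHECKERBOARD = (6, 9)  # assumed inner-corner layout of the calibration pattern


def calibrate_fisheye(image_paths):
    """Estimate intrinsic matrix K and distortion coefficients D from pattern images."""
    objp = np.zeros((1, CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
    objp[0, :, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

    obj_points, img_points, image_size = [], [], None
    for path in image_paths:
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        image_size = gray.shape[::-1]
        found, corners = cv2.findChessboardCorners(gray, CHECKERBOARD)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    K = np.zeros((3, 3))
    D = np.zeros((4, 1))
    flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
    rms, K, D, _, _ = cv2.fisheye.calibrate(
        obj_points, img_points, image_size, K, D,
        flags=flags,
        criteria=(cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6))
    return K, D, image_size


def undistort(frame, K, D, image_size):
    """Remove fish-eye distortion from a single camera frame."""
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, image_size, cv2.CV_16SC2)
    return cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)


# Example usage (file names are assumptions, one calibration per camera):
#   import glob
#   K, D, size = calibrate_fisheye(sorted(glob.glob("front_pattern_*.png")))
#   corrected = undistort(cv2.imread("front_frame.png"), K, D, size)
```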
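The second stage (projecting each undistorted view onto the ground plane via a perspective transformation and stitching the four results into one top-view image) can likewise be illustrated with a homography-based sketch under assumed inputs. The point correspondences, canvas size, and the simple overwrite-style stitching below are placeholders, not values or methods taken from the paper.

```python
# Illustrative sketch (not the paper's code): warp each undistorted camera frame
# onto a common ground-plane canvas and stitch the four views into a top-view image.
import cv2
import numpy as np

CANVAS_SIZE = (800, 800)  # assumed top-view canvas size in pixels (width, height)


def ground_homography(image_pts, canvas_pts):
    """Homography mapping four ground-plane points seen in the undistorted
    camera image to their known positions on the top-view canvas."""
    src = np.asarray(image_pts, dtype=np.float32)
    dst = np.asarray(canvas_pts, dtype=np.float32)
    return cv2.getPerspectiveTransform(src, dst)


def warp_to_topview(frame, H):
    """Apply the perspective transformation that projects one camera view
    onto the common top-view canvas."""
    return cv2.warpPerspective(frame, H, CANVAS_SIZE)


def stitch_topview(warped_views):
    """Naively stitch the warped views: later views overwrite earlier ones
    wherever they contain image content (non-black pixels)."""
    canvas = np.zeros((CANVAS_SIZE[1], CANVAS_SIZE[0], 3), dtype=np.uint8)
    for view in warped_views:
        mask = view.any(axis=2)
        canvas[mask] = view[mask]
    return canvas


# Example usage (point coordinates are placeholders determined during calibration):
#   H_front = ground_homography([(310, 420), (650, 415), (700, 600), (260, 610)],
#                               [(200, 0), (600, 0), (600, 250), (200, 250)])
#   top = stitch_topview([warp_to_topview(f, H) for f, H in zip(frames, homographies)])
```

In a real surround-view pipeline the overlap regions between adjacent cameras are usually blended rather than overwritten, but the simple mask-based combination keeps the sketch short.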