Adaptive Driving Style Classification through Transfer Learning with Synthetic Oversampling
Philippe Jardin, Ioannis Moisidis, Kürşat Kartal, S. Rinderknecht
DOI: 10.3390/vehicles4040069 · Published: 2022-11-15
Abstract
Driving style classification depends not only on objective measures such as vehicle speed or acceleration; it is also highly subjective, since every driver brings their own definition of driving style. In our view, successfully implementing driving style classification in real-world applications therefore requires an adaptive approach tuned to each driver individually. In this work, we propose a transfer learning framework for driving style classification in which a previously developed rule-based algorithm initializes the neural network weights and training proceeds on limited data. To ensure robust training, we applied several state-of-the-art machine learning methods. First, we performed heuristic feature engineering to promote generalized feature building in the first layer. We then calibrated the network so that its output can be used as a probabilistic metric and predictions are only issued above a predefined confidence threshold. To increase the robustness of the transfer learning in early increments, we used a synthetic oversampling technique. Finally, we performed a holistic hyperparameter optimization in the form of a random grid search covering the entire learning framework, from pretraining to incremental adaptation. The resulting algorithm was evaluated on the data of predefined synthetic drivers. Our results show that, by integrating these methods, high system-level performance and robustness are achieved with as few as three new training and validation samples per increment.
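The abstract does not specify the exact oversampling or calibration procedures used. The sketch below is only an illustration of two of the ideas mentioned: SMOTE-style synthetic oversampling of a small per-driver sample set, and confidence-gated predictions from a calibrated probability output. The function names (`oversample`, `predict_with_confidence`), the threshold value, and the toy data are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (assumed, not the paper's code) of synthetic oversampling and
# confidence-thresholded prediction, using only NumPy.
import numpy as np


def oversample(X, y, k=2, n_new=10, rng=None):
    """Create synthetic samples by interpolating between a point and one of its
    k nearest same-class neighbours (SMOTE-like interpolation)."""
    rng = np.random.default_rng(rng)
    X_new, y_new = [], []
    for _ in range(n_new):
        i = rng.integers(len(X))
        same = np.flatnonzero(y == y[i])
        # distances to same-class points; index 0 after sorting is the point itself
        d = np.linalg.norm(X[same] - X[i], axis=1)
        neighbours = same[np.argsort(d)[1:k + 1]]
        j = rng.choice(neighbours)
        lam = rng.random()  # random interpolation factor in [0, 1)
        X_new.append(X[i] + lam * (X[j] - X[i]))
        y_new.append(y[i])
    return np.vstack([X, X_new]), np.concatenate([y, y_new])


def predict_with_confidence(probs, threshold=0.8):
    """Return the predicted class per row, or -1 (abstain) when the calibrated
    probability stays below the confidence threshold."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    return np.where(conf >= threshold, pred, -1)


# Toy usage with a handful of labelled samples, mimicking the few-sample increments.
X = np.array([[0.10, 0.20], [0.15, 0.25], [0.90, 0.80], [0.85, 0.75]])
y = np.array([0, 0, 1, 1])
X_aug, y_aug = oversample(X, y, k=1, n_new=5, rng=0)

probs = np.array([[0.95, 0.05], [0.55, 0.45]])
print(predict_with_confidence(probs))  # prints [ 0 -1]; the second sample abstains
```

In the paper's setting, the augmented set would feed an incremental fine-tuning step of the pretrained network, and the abstain option keeps low-confidence increments from degrading the per-driver model; the code above only mimics that gating logic on fixed probabilities.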