Title: Using Features Extracted From Vital Time Series for Early Prediction of Sepsis
Authors: Qiang Yu, Xiaolin Huang, Weifeng Li, Cheng Wang, Ying Chen, Yun Ge
Published in: 2019 Computing in Cardiology (CinC), September 2019, Pages 1-4
DOI: 10.23919/CinC49843.2019.9005646 (https://doi.org/10.23919/CinC49843.2019.9005646)
Citations: 1
Abstract
For early prediction of sepsis, we propose extracting additional time-dependent features that retain the temporal evolution of the underlying biomedical dynamic system, including derivatives, integrals, time-dependent statistics, variations, and convolutions. Because the two classes are imbalanced in the training set, we employed the EasyEnsemble algorithm to obtain multiple base learners. For the base learner, we tried three models: random forest, XGBoost, and LightGBM. By combining the results of the multiple base learners, we constructed our ensemble model. Our team, njuedu, ranked 25th in the official test with a score of 0.282 on the full test set. Since the submitted model version was trained only on training set A, it scored higher, 0.401, on test set A, 0.278 on test set B, and only -0.207 on test set C.
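The two main ideas in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: the heart-rate values, subset count, and variable names are all hypothetical, and the feature definitions (discrete derivative, running integral, moving-average convolution) are common choices for the feature families the abstract names, not the paper's exact formulas.

```python
import numpy as np

# --- Time-dependent features from a vital-sign series ---------------------
# Hypothetical hourly heart-rate values; illustrative only.
hr = np.array([80.0, 82.0, 85.0, 90.0, 88.0, 95.0, 99.0, 97.0])

diff = np.diff(hr, prepend=hr[0])            # discrete derivative of the series
integral = np.cumsum(hr)                     # running integral of the series
kernel = np.ones(3) / 3.0                    # 3-point averaging kernel (assumed)
conv = np.convolve(hr, kernel, mode="same")  # convolution feature

# --- EasyEnsemble-style balanced subsets ----------------------------------
# Toy labels with the kind of class imbalance found in the sepsis data:
# each subset keeps all positives and an equal-sized random negative sample.
rng = np.random.default_rng(0)
y = np.array([0] * 95 + [1] * 5)
pos = np.flatnonzero(y == 1)
neg = np.flatnonzero(y == 0)

subsets = []
for _ in range(5):  # one balanced subset per base learner (count is assumed)
    neg_sample = rng.choice(neg, size=len(pos), replace=False)
    subsets.append(np.concatenate([pos, neg_sample]))
    # a base learner (random forest / XGBoost / LightGBM) would be fit here,
    # and the ensemble prediction formed by combining the learners' outputs
```

Each balanced subset trains one base learner on an even class split, so no single learner is dominated by the negative class, while the ensemble as a whole still sees most of the negative examples across subsets.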