Anuj Dimri, Suraj Yerramilli, Peng Lee, Sardar Afra, Andrew Jakubowski
Enhancing Claims Handling Processes with Insurance Based Language Models
2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), December 2019
DOI: 10.1109/ICMLA.2019.00284
Citations: 1
Abstract
Insurance companies manage a large number of claims on a daily basis as new claims are reported and existing claims are serviced. A key component of servicing a claim is the ability of Claims personnel to enter raw text, known as claims notes. Claims notes contain invaluable information, often beyond that of structured data; capturing this information in a machine learning setting offers remarkable benefits to many downstream tasks in a Claims department. The ability to leverage claims notes enables an insurance company not only to make data-driven and insightful decisions while handling claims, but also to create value by working more efficiently and serving its customers more effectively. To best leverage the information contained in claims notes, we develop insurance-based language models (IBLMs) by further pre-training existing general-domain language models (ULMFiT and BERT) on a large number of claims notes with an enhanced vocabulary. Furthermore, we test these IBLMs on three downstream binary classification tasks: (1) identification of auto claims with attorney retention, (2) bodily injury prediction, and (3) auto claims fraud investigation detection. We train different classifiers based on claims notes available on day 1 and through day 10 after the claim is reported. We find that IBLMs show a significant improvement over traditional classification approaches. Further, we provide practical insight into how an insurance company might use these models through the analysis of volume (capacity) thresholds.
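The abstract does not spell out how the vocabulary enhancement works, but a common approach when adapting a general-domain model to a specialty corpus is to surface frequent domain terms that the base vocabulary lacks and add them before further pre-training. The sketch below is a minimal illustration of that candidate-mining step under that assumption; the example notes, the `base_vocab` set, and the `find_domain_terms` helper are all hypothetical, not from the paper.

```python
import re
from collections import Counter

def find_domain_terms(notes, base_vocab, min_freq=2):
    """Return frequent tokens in the claims notes that are missing from
    the base vocabulary, as candidates for vocabulary enhancement."""
    counts = Counter()
    for note in notes:
        counts.update(re.findall(r"[a-z']+", note.lower()))
    return sorted(t for t, c in counts.items()
                  if c >= min_freq and t not in base_vocab)

# Hypothetical inputs: insurance jargon absent from a general vocabulary.
base_vocab = {"the", "was", "claim", "driver", "injured", "in", "a",
              "rear", "collision"}
notes = [
    "claimant retained an attorney after the rear-end collision",
    "subrogation pending; claimant alleges soft-tissue injury",
    "attorney letter of representation received for claimant",
]
print(find_domain_terms(notes, base_vocab))  # → ['attorney', 'claimant']
```

In practice the mined terms would be appended to the subword tokenizer's vocabulary and the corresponding embedding rows initialized before continued pre-training on the claims-note corpus.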
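The closing point about volume (capacity) thresholds suggests an operational use: a classifier's scores are only actionable up to the number of claims a team can actually review each day. A minimal sketch of that triage step, assuming scores from a day-1 classifier (the claim ids, scores, and `flag_for_review` helper are illustrative, not from the paper):

```python
def flag_for_review(scores, capacity):
    """Given a mapping of claim id -> model score, return the ids of the
    highest-scoring claims up to the daily review capacity, i.e. the
    volume threshold the team can handle."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:capacity]

# Hypothetical day-1 scores, e.g. predicted attorney-retention risk.
scores = {"C101": 0.91, "C102": 0.35, "C103": 0.78, "C104": 0.64}
print(flag_for_review(scores, capacity=2))  # → ['C101', 'C103']
```

Varying `capacity` traces out the precision/recall trade-off the paper's threshold analysis refers to: a larger review capacity catches more true positives at the cost of more low-value reviews.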