{"title":"Evaluation of equity-linked products in the presence of policyholder surrender option using risk-control strategies","authors":"Patrice Gaillardetz, S. Hachem, Mehran Moghtadai","doi":"10.1017/S1748499521000051","DOIUrl":"https://doi.org/10.1017/S1748499521000051","url":null,"abstract":"Abstract Throughout the past couple of decades, the surge in the sale of equity-linked products has led to many discussions on the evaluation and risk management of surrender options embedded in these products. However, most studies treat such options as American/Bermudian style options. In this article, a different approach is presented where only a portion of the policyholders react optimally due to the belief that not all policyholders are rational. Through this method, a probability of surrender is obtained based on the option moneyness and the product is partially hedged using local risk-control strategies. This partial hedging approach is versatile since few assumptions are required for the financial framework. To compare the different surrender assumptions, the initial capital requirement for an equity-linked product is obtained under a regime-switching equity model. Numerical examples illustrate the dynamics and efficiency of this hedging approach.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2021-03-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499521000051","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46873482","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Extracting information from textual descriptions for actuarial applications","authors":"S. Manski, Kaixu Yang, Gee Y. Lee, T. Maiti","doi":"10.1017/S1748499521000026","DOIUrl":"https://doi.org/10.1017/S1748499521000026","url":null,"abstract":"Abstract Initial insurance losses are often reported with a textual description of the claim. The claims manager must determine the adequate case reserve for each known claim. In this paper, we present a framework for predicting the amount of loss given a textual description of the claim using a large number of words found in the descriptions. Prior work has focused on classifying insurance claims based on keywords selected by a human expert, whereas in this paper the focus is on loss amount prediction with automatic word selection. In order to transform words into numeric vectors, we use word cosine similarities and word embedding matrices. When we consider all unique words found in the training dataset and impose a generalised additive model on the resulting explanatory variables, the resulting design matrix is high dimensional. For this reason, we use a group lasso penalty to reduce the number of coefficients in the model. The scalable, analytical framework proposed provides for a parsimonious and interpretable model. Finally, we discuss the implications of the analysis, including how the framework may be used by an insurance company and how the interpretation of the covariates can lead to significant policy change. The code can be found in the TAGAM R package (github.com/scottmanski/TAGAM).","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2021-03-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499521000026","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45183496","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Modelling random vectors of dependent risks with different elliptical components","authors":"Z. Landsman, T. Shushi","doi":"10.1017/S1748499521000038","DOIUrl":"https://doi.org/10.1017/S1748499521000038","url":null,"abstract":"Abstract In Finance and Actuarial Science, the multivariate elliptical family of distributions is a famous and well-used model for continuous risks. However, it has an essential shortcoming: all its univariate marginal distributions are the same, up to location and scale transformations. For example, all marginals of the multivariate Student’s t-distribution, an important member of the elliptical family, have the same number of degrees of freedom. We introduce a new approach to generate a multivariate distribution whose marginals are elliptical random variables, while in general, each of the risks has a different elliptical distribution, which is important when dealing with insurance and financial data. The proposal is an alternative to the elliptical copula distribution, for which, in many cases, it is very difficult to calculate its risk measures and risk capital allocation. We study the main characteristics of the proposed model: characteristic and density functions, expectations, covariance matrices and expectation of the linear regression vector. We calculate important risk measures for the introduced distributions, such as the value at risk and tail value at risk, and the risk capital allocation of the aggregated risks.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2021-02-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499521000038","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47168492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Functional disability with systematic trends and uncertainty: a comparison between China and the US","authors":"Yu-Hsiang Fu, M. Sherris, Mengyi Xu","doi":"10.2139/SSRN.3785743","DOIUrl":"https://doi.org/10.2139/SSRN.3785743","url":null,"abstract":"Abstract China and the US are two contrasting countries in terms of functional disability and long-term care. China is experiencing declining family support for long-term care and developing private long-term care insurance. The US has a more developed public aged care system and private long-term care insurance market than China. Changes in the demand for long-term care are driven by the levels, trends and uncertainty in mortality and functional disability. To understand the future potential demand for long-term care, we compare mortality and functional disability experiences in China and the US, using a multi-state latent factor intensity model with time trends and systematic uncertainty in transition rates. We estimate the model with the Chinese Longitudinal Healthy Longevity Survey (CLHLS) and the US Health and Retirement Study (HRS) data. The estimation results show that if trends continue, both countries will experience longevity improvement with morbidity compression and a declining proportion of the older population with functional disability. Although the elderly Chinese have a shorter estimated life expectancy, they are expected to spend a smaller proportion of their future lifetime functionally disabled than the elderly Americans. Systematic uncertainty is shown to be significant in future trends in disability rates, and our model estimates higher uncertainty in trends for the Chinese elderly, especially for urban residents.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2021-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42904368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mortality models incorporating long memory for life table estimation: a comprehensive analysis","authors":"Hongxuan Yan, G. Peters, J. Chan","doi":"10.1017/S1748499521000014","DOIUrl":"https://doi.org/10.1017/S1748499521000014","url":null,"abstract":"Abstract Mortality projection and forecasting of life expectancy are two important aspects of the study of demography and life insurance modelling. We demonstrate in this work the existence of long memory in mortality data. Furthermore, models incorporating long memory structure provide a new approach to enhance mortality forecasts in terms of accuracy and reliability, which can improve the understanding of mortality. Novel mortality models are developed by extending the Lee–Carter (LC) model for death counts to incorporate a long memory time series structure. To link our extensions to existing actuarial work, we detail the relationship between the classical models of death counts developed under a Generalised Linear Model (GLM) formulation and the extensions we propose, which are developed under an extension to the GLM framework known in the time series literature as the Generalised Linear Autoregressive Moving Average (GLARMA) regression models. Bayesian inference is applied to estimate the model parameters. The Deviance Information Criterion (DIC) is evaluated to select between different LC model extensions of our proposed models in terms of both in-sample fit and out-of-sample forecast performance. Furthermore, we compare our new models against existing model structures proposed in the literature when applied to the analysis of death count data sets from 16 countries divided according to genders and age groups. Estimates of mortality rates are applied to calculate life expectancies when constructing life tables. By comparing different life expectancy estimates, results show the LC model without the long memory component may provide underestimates of life expectancy, while the long memory model structure extensions reduce this effect. In summary, it is valuable to investigate how the long memory feature in mortality influences life expectancies in the construction of life tables.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2021-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499521000014","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47228434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic importance allocated nested simulation for variable annuity risk measurement","authors":"Ou Dang, M. Feng, M. Hardy","doi":"10.2139/ssrn.3738777","DOIUrl":"https://doi.org/10.2139/ssrn.3738777","url":null,"abstract":"Abstract Estimating tail risk measures for portfolios of complex variable annuities is an important enterprise risk management task which usually requires nested simulation. In the nested simulation, the outer simulation stage involves projecting scenarios of key risk factors under the real-world measure, while the inner simulations are used to value pay-offs under guarantees of varying complexity, under a risk-neutral measure. In this paper, we propose and analyse an efficient simulation approach that dynamically allocates the inner simulations to the specific outer scenarios that are most likely to generate larger losses. These scenarios are identified using a proxy calculation that is used only to rank the outer scenarios, not to estimate the tail risk measure directly. As the proxy ranking will not generally provide a perfect match to the true ranking of outer scenarios, we calculate a measure based on the concomitant of order statistics to test whether further tail scenarios are required to ensure, with given confidence, that the true tail scenarios are captured. This procedure, which we call the dynamic importance allocated nested simulation approach, automatically adjusts for the relationship between the proxy calculations and the true valuations and also signals when the proxy is not sufficiently accurate.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48116453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A spatial machine learning model for analysing customers’ lapse behaviour in life insurance","authors":"Sen Hu, A. O'Hagan, James Sweeney, Mohammadhossein Ghahramani","doi":"10.1017/S1748499520000329","DOIUrl":"https://doi.org/10.1017/S1748499520000329","url":null,"abstract":"Abstract Spatial analysis ranges from simple univariate descriptive statistics to complex multivariate analyses and is typically used to investigate spatial patterns or to identify spatially linked consumer behaviours in insurance. This paper investigates if the incorporation of publicly available spatially linked demographic census data at population level is useful in modelling customers’ lapse behaviour (i.e. stopping payment of premiums) in life insurance policies, based on data provided by an insurance company in Ireland. From the insurance company’s perspective, identifying and assessing such lapsing risks in advance permits engagement to prevent such incidents, saving money by re-evaluating customer acquisition channels and improving capital reserve calculation and preparation. Incorporating spatial analysis in lapse modelling is expected to improve lapse prediction. Therefore, a hybrid approach to lapse prediction is proposed – spatial clustering using census data is used to reveal the underlying spatial structure of customers of the Irish life insurer, in conjunction with traditional statistical models for lapse prediction based on the company data. The primary contribution of this work is to consider the spatial characteristics of customers for life insurance lapse behaviour, via the integration of reliable government provided census demographics, which has not been considered previously in actuarial literature. Company decision-makers can use the insights gleaned from this analysis to identify customer subsets to target with personalized promotions to reduce lapse rates, and to reduce overall company risk.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2020-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499520000329","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47046648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clustering driving styles via image processing","authors":"Rui Zhu, M. Wüthrich","doi":"10.1017/S1748499520000317","DOIUrl":"https://doi.org/10.1017/S1748499520000317","url":null,"abstract":"Abstract It has become of key interest in the insurance industry to understand and extract information from telematics car driving data. Telematics car driving data of individual car drivers can be summarised in so-called speed–acceleration heatmaps. The aim of this study is to cluster such speed–acceleration heatmaps to different categories by analysing similarities and differences in these heatmaps. Making use of local smoothness properties, we propose to process these heatmaps as RGB images. Clustering can then be achieved by involving supervised information via a transfer learning approach using the pre-trained AlexNet to extract discriminative features. The K-means algorithm is then applied on these extracted discriminative features for clustering. The experiment results in an improvement of heatmap clustering compared to classical approaches.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2020-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499520000317","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57009894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Imprecise credibility theory","authors":"Liang Hong, Ryan Martin","doi":"10.1017/S1748499521000117","DOIUrl":"https://doi.org/10.1017/S1748499521000117","url":null,"abstract":"Abstract The classical credibility theory is a cornerstone of experience rating, especially in the field of property and casualty insurance. An obstacle to putting the credibility theory into practice is the conversion of available prior information into a precise choice of crucial hyperparameters. In most real-world applications, the information necessary to justify a precise choice is lacking, so we propose an imprecise credibility estimator that honestly acknowledges the imprecision in the hyperparameter specification. This results in an interval estimator that is doubly robust in the sense that it retains the credibility estimator’s freedom from model specification and fast asymptotic concentration, while simultaneously being insensitive to prior hyperparameter specification.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499521000117","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44777579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mortality forecasting using a Lexis-based state-space model","authors":"Patrik Andersson, M. Lindholm","doi":"10.1017/S1748499520000275","DOIUrl":"https://doi.org/10.1017/S1748499520000275","url":null,"abstract":"Abstract A new method of forecasting mortality is introduced. The method is based on the continuous-time dynamics of the Lexis diagram, which given weak assumptions implies that the death count data are Poisson distributed. The underlying mortality rates are modelled with a hidden Markov model (HMM) which enables a fully likelihood-based inference. Likelihood inference is done by particle filter methods, which avoids approximating assumptions and also suggests natural model validation measures. The proposed model class contains as special cases many previous models with the important difference that the HMM methods make it possible to estimate the model efficiently. Another difference is that the population and latent variable variability can be explicitly modelled and estimated. Numerical examples show that the model performs well and that inefficient estimation methods can severely affect forecasts.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":null,"pages":null},"PeriodicalIF":1.7,"publicationDate":"2020-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499520000275","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46613529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}