{"title":"Mortality models incorporating long memory for life table estimation: a comprehensive analysis","authors":"Hongxuan Yan, G. Peters, J. Chan","doi":"10.1017/S1748499521000014","DOIUrl":"https://doi.org/10.1017/S1748499521000014","url":null,"abstract":"Abstract Mortality projection and forecasting of life expectancy are two important aspects of the study of demography and life insurance modelling. We demonstrate in this work the existence of long memory in mortality data. Furthermore, models incorporating long memory structure provide a new approach to enhance mortality forecasts in terms of accuracy and reliability, which can improve the understanding of mortality. Novel mortality models are developed by extending the Lee–Carter (LC) model for death counts to incorporate a long memory time series structure. To link our extensions to existing actuarial work, we detail the relationship between the classical models of death counts developed under a Generalised Linear Model (GLM) formulation and the extensions we propose that are developed under an extension to the GLM framework known in time series literature as the Generalised Linear Autoregressive Moving Average (GLARMA) regression models. Bayesian inference is applied to estimate the model parameters. The Deviance Information Criterion (DIC) is evaluated to select between different LC model extensions of our proposed models in terms of both in-sample fits and out-of-sample forecasts performance. Furthermore, we compare our new models against existing models structures proposed in the literature when applied to the analysis of death count data sets from 16 countries divided according to genders and age groups. Estimates of mortality rates are applied to calculate life expectancies when constructing life tables. By comparing different life expectancy estimates, results show the LC model without the long memory component may provide underestimates of life expectancy, while the long memory model structure extensions reduce this effect. In summary, it is valuable to investigate how the long memory feature in mortality influences life expectancies in the construction of life tables.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":"15 1","pages":"567 - 604"},"PeriodicalIF":1.7,"publicationDate":"2021-02-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499521000014","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47228434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamic importance allocated nested simulation for variable annuity risk measurement","authors":"Ou Dang, M. Feng, M. Hardy","doi":"10.2139/ssrn.3738777","DOIUrl":"https://doi.org/10.2139/ssrn.3738777","url":null,"abstract":"Abstract Estimating tail risk measures for portfolios of complex variable annuities is an important enterprise risk management task which usually requires nested simulation. In the nested simulation, the outer simulation stage involves projecting scenarios of key risk factors under the real-world measure, while the inner simulations are used to value pay-offs under guarantees of varying complexity, under a risk-neutral measure. In this paper, we propose and analyse an efficient simulation approach that dynamically allocates the inner simulations to the specific outer scenarios that are most likely to generate larger losses. These scenarios are identified using a proxy calculation that is used only to rank the outer scenarios, not to estimate the tail risk measure directly. As the proxy ranking will not generally provide a perfect match to the true ranking of outer scenarios, we calculate a measure based on the concomitant of order statistics to test whether further tail scenarios are required to ensure, with given confidence, that the true tail scenarios are captured. This procedure, which we call the dynamic importance allocated nested simulation approach, automatically adjusts for the relationship between the proxy calculations and the true valuations and also signals when the proxy is not sufficiently accurate.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":"16 1","pages":"319 - 348"},"PeriodicalIF":1.7,"publicationDate":"2020-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48116453","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A spatial machine learning model for analysing customers’ lapse behaviour in life insurance","authors":"Sen Hu, A. O'Hagan, James Sweeney, Mohammadhossein Ghahramani","doi":"10.1017/S1748499520000329","DOIUrl":"https://doi.org/10.1017/S1748499520000329","url":null,"abstract":"Abstract Spatial analysis ranges from simple univariate descriptive statistics to complex multivariate analyses and is typically used to investigate spatial patterns or to identify spatially linked consumer behaviours in insurance. This paper investigates if the incorporation of publicly available spatially linked demographic census data at population level is useful in modelling customers’ lapse behaviour (i.e. stopping payment of premiums) in life insurance policies, based on data provided by an insurance company in Ireland. From the insurance company’s perspective, identifying and assessing such lapsing risks in advance permit engagement to prevent such incidents, saving money by re-evaluating customer acquisition channels and improving capital reserve calculation and preparation. Incorporating spatial analysis in lapse modelling is expected to improve lapse prediction. Therefore, a hybrid approach to lapse prediction is proposed – spatial clustering using census data is used to reveal the underlying spatial structure of customers of the Irish life insurer, in conjunction with traditional statistical models for lapse prediction based on the company data. The primary contribution of this work is to consider the spatial characteristics of customers for life insurance lapse behaviour, via the integration of reliable government provided census demographics, which has not been considered previously in actuarial literature. Company decision-makers can use the insights gleaned from this analysis to identify customer subsets to target with personalized promotions to reduce lapse rates, and to reduce overall company risk.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":"15 1","pages":"367 - 393"},"PeriodicalIF":1.7,"publicationDate":"2020-11-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499520000329","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47046648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Clustering driving styles via image processing","authors":"Rui Zhu, M. Wüthrich","doi":"10.1017/S1748499520000317","DOIUrl":"https://doi.org/10.1017/S1748499520000317","url":null,"abstract":"Abstract It has become of key interest in the insurance industry to understand and extract information from telematics car driving data. Telematics car driving data of individual car drivers can be summarised in so-called speed–acceleration heatmaps. The aim of this study is to cluster such speed–acceleration heatmaps to different categories by analysing similarities and differences in these heatmaps. Making use of local smoothness properties, we propose to process these heatmaps as RGB images. Clustering can then be achieved by involving supervised information via a transfer learning approach using the pre-trained AlexNet to extract discriminative features. The K-means algorithm is then applied on these extracted discriminative features for clustering. The experiment results in an improvement of heatmap clustering compared to classical approaches.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":"15 1","pages":"276 - 290"},"PeriodicalIF":1.7,"publicationDate":"2020-10-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499520000317","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"57009894","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Imprecise credibility theory","authors":"Liang Hong, Ryan Martin","doi":"10.1017/S1748499521000117","DOIUrl":"https://doi.org/10.1017/S1748499521000117","url":null,"abstract":"Abstract The classical credibility theory is a cornerstone of experience rating, especially in the field of property and casualty insurance. An obstacle to putting the credibility theory into practice is the conversion of available prior information into a precise choice of crucial hyperparameters. In most real-world applications, the information necessary to justify a precise choice is lacking, so we propose an imprecise credibility estimator that honestly acknowledges the imprecision in the hyperparameter specification. This results in an interval estimator that is doubly robust in the sense that it retains the credibility estimator’s freedom from model specification and fast asymptotic concentration, while simultaneously being insensitive to prior hyperparameter specification.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":"16 1","pages":"136 - 150"},"PeriodicalIF":1.7,"publicationDate":"2020-10-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499521000117","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44777579","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mortality forecasting using a Lexis-based state-space model","authors":"Patrik Andersson, M. Lindholm","doi":"10.1017/S1748499520000275","DOIUrl":"https://doi.org/10.1017/S1748499520000275","url":null,"abstract":"Abstract A new method of forecasting mortality is introduced. The method is based on the continuous-time dynamics of the Lexis diagram, which given weak assumptions implies that the death count data are Poisson distributed. The underlying mortality rates are modelled with a hidden Markov model (HMM) which enables a fully likelihood-based inference. Likelihood inference is done by particle filter methods, which avoids approximating assumptions and also suggests natural model validation measures. The proposed model class contains as special cases many previous models with the important difference that the HMM methods make it possible to estimate the model efficiently. Another difference is that the population and latent variable variability can be explicitly modelled and estimated. Numerical examples show that the model performs well and that inefficient estimation methods can severely affect forecasts.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":"15 1","pages":"519 - 548"},"PeriodicalIF":1.7,"publicationDate":"2020-09-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499520000275","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46613529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A review on Poisson, Cox, Hawkes, shot-noise Poisson and dynamic contagion process and their compound processes","authors":"Jiwook Jang, Rosy Oh","doi":"10.1017/S1748499520000287","DOIUrl":"https://doi.org/10.1017/S1748499520000287","url":null,"abstract":"Abstract The Poisson process is an essential building block to move up to complicated counting processes, such as the Cox (“doubly stochastic Poisson”) process, the Hawkes (“self-exciting”) process, exponentially decaying shot-noise Poisson (simply “shot-noise Poisson”) process and the dynamic contagion process. The Cox process provides flexibility by letting the intensity not only depending on time but also allowing it to be a stochastic process. The Hawkes process has self-exciting property and clustering effects. Shot-noise Poisson process is an extension of the Poisson process, where it is capable of displaying the frequency, magnitude and time period needed to determine the effect of points. The dynamic contagion process is a point process, where its intensity generalises the Hawkes process and Cox process with exponentially decaying shot-noise intensity. To facilitate the usage of these processes in practice, we revisit the distributional properties of the Poisson, Cox, Hawkes, shot-noise Poisson and dynamic contagion process and their compound processes. We provide simulation algorithms for these processes, which would be useful to statistical analysis, further business applications and research. As an application of the compound processes, numerical comparisons of value-at-risk and tail conditional expectation are made.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":"15 1","pages":"623 - 644"},"PeriodicalIF":1.7,"publicationDate":"2020-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499520000287","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41394269","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multifactorial disorders and polygenic risk scores: predicting common diseases and the possibility of adverse selection in life and protection insurance – CORRIGENDUM","authors":"Jessye Maxwell, Richard A. Russell, H. Wu, N. Sharapova, Peter Banthorpe, Paul F O'Reilly, C. Lewis","doi":"10.1017/s1748499520000299","DOIUrl":"https://doi.org/10.1017/s1748499520000299","url":null,"abstract":"","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":"15 1","pages":"504 - 504"},"PeriodicalIF":1.7,"publicationDate":"2020-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/s1748499520000299","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44305407","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI in actuarial science – a review of recent advances – part 2","authors":"Ronald Richman","doi":"10.1017/S174849952000024X","DOIUrl":"https://doi.org/10.1017/S174849952000024X","url":null,"abstract":"Abstract Rapid advances in artificial intelligence (AI) and machine learning are creating products and services with the potential not only to change the environment in which actuaries operate, but also to provide new opportunities within actuarial science. These advances are based on a modern approach to designing, fitting and applying neural networks, generally referred to as “Deep Learning”. This paper investigates how actuarial science may adapt and evolve in the coming years to incorporate these new techniques and methodologies. Part 1 of this paper provides background on machine learning and deep learning, as well as an heuristic for where actuaries might benefit from applying these techniques. Part 2 of the paper then surveys emerging applications of AI in actuarial science, with examples from mortality modelling, claims reserving, non-life pricing and telematics. For some of the examples, code has been provided on GitHub so that the interested reader can experiment with these techniques for themselves. Part 2 concludes with an outlook on the potential for actuaries to integrate deep learning into their activities. Finally, a supplementary appendix discusses further resources providing more in-depth background on machine learning and deep learning.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":"15 1","pages":"230 - 258"},"PeriodicalIF":1.7,"publicationDate":"2020-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S174849952000024X","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45034473","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"AI in actuarial science – a review of recent advances – part 1","authors":"Ronald Richman","doi":"10.1017/S1748499520000238","DOIUrl":"https://doi.org/10.1017/S1748499520000238","url":null,"abstract":"Abstract Rapid advances in artificial intelligence (AI) and machine learning are creating products and services with the potential not only to change the environment in which actuaries operate but also to provide new opportunities within actuarial science. These advances are based on a modern approach to designing, fitting and applying neural networks, generally referred to as “Deep Learning.” This paper investigates how actuarial science may adapt and evolve in the coming years to incorporate these new techniques and methodologies. Part 1 of this paper provides background on machine learning and deep learning, as well as an heuristic for where actuaries might benefit from applying these techniques. Part 2 of the paper then surveys emerging applications of AI in actuarial science, with examples from mortality modelling, claims reserving, non-life pricing and telematics. For some of the examples, code has been provided on GitHub so that the interested reader can experiment with these techniques for themselves. Part 2 concludes with an outlook on the potential for actuaries to integrate deep learning into their activities. Finally, a supplementary appendix discusses further resources providing more in-depth background on machine learning and deep learning.","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":"15 1","pages":"207 - 229"},"PeriodicalIF":1.7,"publicationDate":"2020-08-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1017/S1748499520000238","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47533649","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}