{"title":"AAS Thematic issue: “Mortality: from Lee–Carter to AI”","authors":"Jennifer Alonso-García","doi":"10.1017/S1748499522000069","DOIUrl":null,"url":null,"abstract":"Exactly 11 years ago, Sweeting (2011) noted in his Editorial that “Even with the uncertainties around the choices of models and parameters, [stochastic mortality modeling] can be used to give a probabilistic assessment of the range of outcomes”. A quick read through past issues of Annals of Actuarial Science shows us that mortality modelling is still a hot topic in actuarial science, as evidenced in the multiple papers that have aimed at innovating towards the most suitable mathematical frameworks and model specifications. The past three decades have been characterised by a myriad of developments, Li & Lee (2005), Cairns et al. (2006), Renshaw & Haberman (2006) to cite a few, raising the need for a useful overview in both modelling and forecasting. Booth & Tickle (2008), in their exhaustive work, review the main methodological developments in (stochastic) mortality modelling from 1980 onwards focusing not only on Lee–Carter or GLM-based methodologies but also on parametric models and old-age mortality. In the same vein, Li (2014) focuses exclusively on simulation strategies. After sticking to a Lee & Carter (1992) model, and given the explosion of scientific papers focusing on how to best account for forecasting uncertainty, Li (2014) asks the simple question: What is the best performing simulation strategy? The answer is: it depends on the model fit; furthermore the choice of forecasting procedure matters. Clearly, attention has to be put into how the base model fits the data before focusing on the forecast. If there are unusual patterns in the residuals caused, e.g. by a non-captured cohort effect, the results produced by different simulation techniques could vary substantially. There is consensus about residuals needing to be pattern-free for a model to be well performing. This observation motivated Renshaw & Haberman (2006) to generalize the classical Lee & Carter (1992) model, adding a cohort component. They show that adding such a cohort effect renders the residual plots pattern-free. However, since cohort is directly related to age and period, identifiability issues arise due to the collinearity between these three parameters. This could be particularly problematic when projecting future mortality rates. Hunt & Blake (2020) focus on this particular issue. They highlight that some identifiability constraints are arbitrary and have an impact on the trend of particular parameters. Hence, they propose to determine which features of the parameters are data driven or choice driven. Based only on the data-driven trends, a selection for the time series should be done, ensuring that the forecast does not depend on arbitrary choices. Another way of studying mortality is not by extrapolating aggregate trends with a suitable model, but by studying the underlying causes of death. This allows for an analysis of causal mortality, as well as the dependence between different competing causes. Indeed, if you die from cardiovascular disease, you simply cannot have also died in a car accident. Alai et al. (2015) present a multinomial logistic framework to incorporate cause of death into mortality analysis. 
As others in the literature, they obtain estimates that are more conservative with regard to longevity,","PeriodicalId":44135,"journal":{"name":"Annals of Actuarial Science","volume":"17 1","pages":"212 - 214"},"PeriodicalIF":1.5000,"publicationDate":"2022-06-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Annals of Actuarial Science","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1017/S1748499522000069","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"BUSINESS, FINANCE","Score":null,"Total":0}
Citations: 1
Abstract
Exactly 11 years ago, Sweeting (2011) noted in his Editorial that “Even with the uncertainties around the choices of models and parameters, [stochastic mortality modeling] can be used to give a probabilistic assessment of the range of outcomes”. A quick read through past issues of Annals of Actuarial Science shows that mortality modelling is still a hot topic in actuarial science, as evidenced by the multiple papers that have aimed at innovating towards the most suitable mathematical frameworks and model specifications. The past three decades have been characterised by a myriad of developments, Li & Lee (2005), Cairns et al. (2006) and Renshaw & Haberman (2006) to cite a few, raising the need for a useful overview of both modelling and forecasting. Booth & Tickle (2008), in their exhaustive work, review the main methodological developments in (stochastic) mortality modelling from 1980 onwards, focusing not only on Lee–Carter or GLM-based methodologies but also on parametric models and old-age mortality. In the same vein, Li (2014) focuses exclusively on simulation strategies.

Working with a Lee & Carter (1992) model, and given the explosion of scientific papers on how best to account for forecasting uncertainty, Li (2014) asks a simple question: what is the best-performing simulation strategy? The answer is that it depends on the model fit; furthermore, the choice of forecasting procedure matters. Clearly, attention has to be paid to how the base model fits the data before focusing on the forecast. If there are unusual patterns in the residuals caused, e.g. by a non-captured cohort effect, the results produced by different simulation techniques could vary substantially. There is consensus that residuals need to be pattern-free for a model to perform well. This observation motivated Renshaw & Haberman (2006) to generalise the classical Lee & Carter (1992) model by adding a cohort component. They show that adding such a cohort effect renders the residual plots pattern-free. However, since cohort is directly related to age and period, identifiability issues arise due to the collinearity between these three parameters. This could be particularly problematic when projecting future mortality rates. Hunt & Blake (2020) focus on this particular issue. They highlight that some identifiability constraints are arbitrary and have an impact on the trend of particular parameters. Hence, they propose to determine which features of the parameters are data driven and which are choice driven. The time series should then be selected based only on the data-driven trends, ensuring that the forecast does not depend on arbitrary choices.

Another way of studying mortality is not by extrapolating aggregate trends with a suitable model, but by studying the underlying causes of death. This allows for an analysis of causal mortality, as well as the dependence between different competing causes. Indeed, if you die from cardiovascular disease, you simply cannot have also died in a car accident. Alai et al. (2015) present a multinomial logistic framework to incorporate cause of death into mortality analysis. Like others in the literature, they obtain estimates that are more conservative with regard to longevity,
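For readers less familiar with the models cited in the abstract, the following is a brief sketch in standard notation; the notation is assumed for illustration and is not taken from the editorial or from the cited papers' exact specifications. The Lee & Carter (1992) model for the log central death rate at age x in year t, with its usual identifiability constraints, is

\[
\ln m_{x,t} = a_x + b_x\,\kappa_t + \varepsilon_{x,t},
\qquad \sum_x b_x = 1, \quad \sum_t \kappa_t = 0,
\]

where the period index \(\kappa_t\) is typically forecast as a random walk with drift. The Renshaw & Haberman (2006) extension adds a cohort term indexed by year of birth \(c = t - x\),

\[
\ln m_{x,t} = a_x + b_x^{(1)}\,\kappa_t + b_x^{(0)}\,\gamma_{t-x} + \varepsilon_{x,t}.
\]

The collinearity between age, period and cohort mentioned above reflects the fact that the fitted rates are unchanged under certain reparametrisations of \((a_x, b_x^{(0)}, b_x^{(1)}, \kappa_t, \gamma_{t-x})\); the constraints used to pin down a unique solution are, as Hunt & Blake (2020) stress, partly arbitrary, which is why they distinguish data-driven from choice-driven features before forecasting.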
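Similarly, a minimal sketch of a multinomial logistic framework for cause-of-death data, in the spirit of Alai et al. (2015) but not reproducing their exact specification: with J competing causes and survival taken as the baseline category, the one-year probability of dying from cause j at age x in year t can be written

\[
q_{x,t}^{(j)} = \frac{\exp\!\big(\eta_j(x,t)\big)}{1 + \sum_{k=1}^{J} \exp\!\big(\eta_k(x,t)\big)},
\qquad j = 1, \dots, J,
\]

with survival probability \(p_{x,t} = 1 - \sum_{j} q_{x,t}^{(j)} = 1 / \big(1 + \sum_{k} \exp(\eta_k(x,t))\big)\) and \(\eta_j(x,t)\) a cause-specific linear predictor in age and period (a hypothetical choice here). Because the cause-specific probabilities sum to at most one, dying from one cause automatically excludes dying from another, which is exactly the competing-risks dependence discussed in the abstract.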