AAS Thematic issue: “Mortality: from Lee–Carter to AI”

Jennifer Alonso-García

Annals of Actuarial Science, 17(1), pp. 212–214. Published 9 June 2022. DOI: 10.1017/S1748499522000069

Abstract

Exactly 11 years ago, Sweeting (2011) noted in his Editorial that "Even with the uncertainties around the choices of models and parameters, [stochastic mortality modeling] can be used to give a probabilistic assessment of the range of outcomes". A quick read through past issues of Annals of Actuarial Science shows us that mortality modelling is still a hot topic in actuarial science, as evidenced by the many papers that have aimed at innovating towards the most suitable mathematical frameworks and model specifications. The past three decades have been characterised by a myriad of developments, Li & Lee (2005), Cairns et al. (2006) and Renshaw & Haberman (2006) to cite a few, raising the need for a useful overview of both modelling and forecasting. Booth & Tickle (2008), in their exhaustive work, review the main methodological developments in (stochastic) mortality modelling from 1980 onwards, focusing not only on Lee–Carter and GLM-based methodologies but also on parametric models and old-age mortality. In the same vein, Li (2014) focuses exclusively on simulation strategies. Restricting attention to a Lee & Carter (1992) model, and given the explosion of scientific papers on how best to account for forecasting uncertainty, Li (2014) asks a simple question: what is the best-performing simulation strategy? The answer is: it depends on the model fit; furthermore, the choice of forecasting procedure matters.

Clearly, attention has to be paid to how the base model fits the data before focusing on the forecast. If there are unusual patterns in the residuals, caused, for example, by an uncaptured cohort effect, the results produced by different simulation techniques can vary substantially. There is consensus that residuals need to be pattern-free for a model to perform well. This observation motivated Renshaw & Haberman (2006) to generalise the classical Lee & Carter (1992) model by adding a cohort component; they show that adding such a cohort effect renders the residual plots pattern-free. However, since cohort is directly related to age and period, identifiability issues arise from the collinearity between these three parameters, which can be particularly problematic when projecting future mortality rates. Hunt & Blake (2020) focus on this particular issue. They highlight that some identifiability constraints are arbitrary and affect the trends of particular parameters. Hence, they propose to determine which features of the parameters are data driven and which are choice driven; the time series used for forecasting should then be chosen on the basis of the data-driven trends alone, ensuring that the forecast does not depend on arbitrary choices.

Another way of studying mortality is not to extrapolate aggregate trends with a suitable model, but to study the underlying causes of death. This allows for an analysis of cause-specific mortality, as well as of the dependence between competing causes: if you die from cardiovascular disease, you simply cannot have also died in a car accident. Alai et al. (2015) present a multinomial logistic framework to incorporate cause of death into mortality analysis. Like others in the literature, they obtain estimates that are more conservative with regard to longevity,
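The models referenced above are not written out in the text; for orientation, the classical Lee & Carter (1992) structure for the central death rate m_{x,t} at age x in year t, with its usual identification constraints, is

\log m_{x,t} = a_x + b_x k_t + \varepsilon_{x,t}, \qquad \sum_x b_x = 1, \quad \sum_t k_t = 0,

and the Renshaw & Haberman (2006) generalisation adds a cohort term indexed by year of birth c = t - x:

\log m_{x,t} = a_x + b_x^{(1)} k_t + b_x^{(0)} \gamma_{t-x}.

The exact relation c = t - x is the collinearity between age, period and cohort mentioned above: further constraints (e.g. \sum_c \gamma_c = 0) are needed to pin the parameters down, and, as Hunt & Blake (2020) stress, some of those choices are arbitrary yet affect the fitted trends.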
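As a concrete illustration of the fit-then-forecast pipeline discussed above, here is a minimal sketch, not taken from any of the cited papers, of the common SVD estimation of the Lee–Carter model followed by random-walk-with-drift simulation of k_t. The toy mortality surface and all variable names are invented purely for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# --- Toy data (assumption: in practice log_m would hold observed log
# central death rates, ages in rows and calendar years in columns) ---
ages, years = 50, 40
a_true = np.linspace(-8.0, -1.5, ages)        # log mortality level by age
b_true = np.full(ages, 1.0 / ages)            # age sensitivity to the trend
k_true = -0.4 * np.arange(years)              # steadily improving mortality
log_m = (a_true[:, None] + b_true[:, None] * k_true[None, :]
         + rng.normal(0.0, 0.02, size=(ages, years)))

# --- Lee-Carter fit via SVD: log m_{x,t} ~ a_x + b_x k_t ---
a = log_m.mean(axis=1)                        # a_x: average log rate per age
U, S, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
b = U[:, 0] / U[:, 0].sum()                   # impose sum_x b_x = 1
k = S[0] * Vt[0, :] * U[:, 0].sum()           # rescale so b_x k_t is unchanged
# (sum_t k_t = 0 holds automatically because each row was centred)

# --- Forecast k_t as a random walk with drift, simulating many paths ---
drift = np.diff(k).mean()
sigma = np.diff(k).std(ddof=1)
n_sims, horizon = 1000, 25
shocks = rng.normal(0.0, sigma, size=(n_sims, horizon))
k_paths = k[-1] + drift * np.arange(1, horizon + 1) + shocks.cumsum(axis=1)

# Probabilistic assessment of outcomes, e.g. log m at the oldest age:
log_m_old = a[-1] + b[-1] * k_paths           # shape (n_sims, horizon)
lo, hi = np.percentile(log_m_old[:, -1], [2.5, 97.5])
print(f"95% interval for log m at horizon {horizon}: [{lo:.2f}, {hi:.2f}]")
```

The SVD route is only one estimator (Poisson maximum likelihood is also standard), but it keeps the sketch dependency-free; the point of Li (2014), echoed above, is that both the quality of this base fit and the chosen simulation procedure drive the resulting interval.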
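The competing-risks point at the end, that cause-specific deaths are mutually exclusive, is exactly what a multinomial logistic specification of the kind used by Alai et al. (2015) encodes. In generic form (the covariate vector z and coefficients \beta_j here are illustrative, not their exact specification),

\Pr(\text{death from cause } j \mid z) = \frac{\exp(\beta_j^\top z)}{\sum_{k=1}^{J} \exp(\beta_k^\top z)}, \qquad j = 1, \dots, J,

with one category (a reference cause, or survival) fixed at \beta = 0, so that the probabilities across causes automatically sum to one.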