{"title":"Flexible Weather Index Insurance Design with Penalized Splines","authors":"Ken Seng Tan, Jinggong Zhang","doi":"10.1080/10920277.2022.2162924","DOIUrl":"https://doi.org/10.1080/10920277.2022.2162924","url":null,"abstract":"In this article, we propose a flexible framework for the design of weather index insurance (WII) based on penalized spline methods. The aim is to find the indemnity function that optimally characterizes the intricate relationship between agricultural production losses and weather variables and thus effectively improves policyholders’ utilities. We use B-spline functions to define the feasible set of the optimization problem and a penalty function to avoid the “overfitting” issue. The proposed design framework is applied to an empirical study in which we use precipitation and vapor pressure deficit (VPD) to construct an index insurance contract for corn producers in Illinois. Numerical evidence shows that the resulting optimal insurance contract effectively enhances policyholder’s utility, even in the absence of the government’s premium subsidy. In addition, the performance of our proposed index insurance is robust to a variety of key factors, and the general payment structure is highly interpretable for marketing purposes. All of these merits indicate its potential to increase efficiency of the agricultural insurance market and thus enhance social welfare.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135205969","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Multivariate Insurance Portfolio Risk Retention Using the Method of Multipliers","authors":"Gee Y. Lee","doi":"10.1080/10920277.2022.2161578","DOIUrl":"https://doi.org/10.1080/10920277.2022.2161578","url":null,"abstract":"For an insurance company insuring multiple risks, capital allocation is an important practical problem. In the capital allocation problem, the insurance company must determine the amount of capital to assign to each policy or, equivalently, the amount of premium to be collected from each policy. Doing this relates to the problem of determining the risk retention parameters for each policy within the portfolio. In this article, the insurance risk retention problem of determining the optimal retention parameters is explored in a multivariate context. Given an underlying claims distribution and premium constraint, we are interested in finding the optimal amount of risk to retain or, equivalently, which level of risk retention parameters should be chosen by an insurance company. The risk retention parameter may be deductible (d), upper limit (u), or coinsurance (c). We present a numerical approach to solving the risk retention problem using the method of multipliers and illustrate how it can be implemented. In a case study, the minimum amount of premium to be collected is used as a constraint to the optimization and the upper limit is optimized for each policyholder. A Bayesian approach is taken for estimation of the parameters in a simple model involving regional effects and individual policyholder effects for the Wisconsin Local Government Property Insurance Fund (LGPIF) data, where the parameter estimation is performed in the R computing environment using the Stan library.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-01-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49207702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Society of Actuaries and North American Actuarial Journal Announce New Editor","authors":"","doi":"10.1080/10920277.2023.2169533","DOIUrl":"https://doi.org/10.1080/10920277.2023.2169533","url":null,"abstract":"","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49211782","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On a Risk Model With Dual Seasonalities","authors":"Yang Miao, Kristina P. Sendova, B. Jones","doi":"10.1080/10920277.2022.2068611","DOIUrl":"https://doi.org/10.1080/10920277.2022.2068611","url":null,"abstract":"We consider a risk model where both the premium income and the claim process have seasonal fluctuations. We obtain the probability of ruin based on the simulation approach presented in Morales. We also discuss the conditions that must be satisfied for this approach to work. We give both a numerical example that is based on a simulation study and an example using a real-life auto insurance data set. Various properties of this risk model are also discussed and compared with the existing literature.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2023-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43942298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"The Automated Bias-Corrected and Accelerated Bootstrap Confidence Intervals for Risk Measures","authors":"B. Grün, T. Miljkovic","doi":"10.1080/10920277.2022.2141781","DOIUrl":"https://doi.org/10.1080/10920277.2022.2141781","url":null,"abstract":"Different approaches to determining two-sided interval estimators for risk measures such as Value-at-Risk (VaR) and conditional tail expectation (CTE) when modeling loss data exist in the actuarial literature. Two contrasting methods can be distinguished: a nonparametric one not relying on distributional assumptions or a fully parametric one relying on standard asymptotic theory to apply. We complement these approaches and take advantage of currently available computer power to propose the bias-corrected and accelerated (BCA) confidence intervals for VaR and CTE. The BCA confidence intervals allow the use of a parametric model but do not require standard asymptotic theory to apply. We outline the details to determine interval estimators for these three different approaches using general computational tools as well as with analytical formulas when assuming the truncated Lognormal distribution as a parametric model for insurance loss data. An extensive simulation study is performed to assess the performance of the proposed BCA method in comparison to the two alternative methods. A real dataset of left-truncated insurance losses is employed to illustrate the implementation of the BCA-VaR and BCA-CTE interval estimators in practice when using the truncated Lognormal distribution for modeling the loss data.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45276732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Computing and Estimating Distortion Risk Measures: How to Handle Analytically Intractable Cases?","authors":"Sahadeb Upretee, V. Brazauskas","doi":"10.1080/10920277.2022.2137201","DOIUrl":"https://doi.org/10.1080/10920277.2022.2137201","url":null,"abstract":"In insurance data analytics and actuarial practice, distortion risk measures are used to capture the riskiness of the distribution tail. Point and interval estimates of the risk measures are then employed to price extreme events, to develop reserves, to design risk transfer strategies, and to allocate capital. Often the computation of those estimates relies on Monte Carlo simulations, which, depending upon the complexity of the problem, can be very costly in terms of required expertise and computational time. In this article, we study analytic and numerical evaluation of distortion risk measures, with the expectation that the proposed formulas or inequalities will reduce the computational burden. Specifically, we consider several distortion risk measures––value-at-risk (VaR), conditional tail expectation (cte), proportional hazards transform (pht), Wang transform (wt), and Gini shortfall (gs)––and evaluate them when the loss severity variable follows shifted exponential, Pareto I, and shifted lognormal distributions (all chosen to have the same support), which exhibit common distributional shapes of insurance losses. For these choices of risk measures and loss models, only the VaR and cte measures always possess explicit formulas. For pht, wt, and gs, there are cases when the analytic treatment of the measure is not feasible. In the latter situations, conditions under which the measure is finite are studied rigorously. In particular, we prove several theorems that specify two-sided bounds for the analytically intractable cases. The quality of the bounds is further investigated by comparing them with numerically evaluated risk measures. Finally, a simulation study involving application of those bounds in statistical estimation of the risk measures is also provided.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46989295","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Are Internal Capital Markets Ex Post Efficient?","authors":"James M. Carson, Evan M. Eastman, David L. Eckles, Joshua D. Frederick","doi":"10.1080/10920277.2022.2126373","DOIUrl":"https://doi.org/10.1080/10920277.2022.2126373","url":null,"abstract":"Internal capital markets enable conglomerates to allocate capital to segments throughout the enterprise. Prior literature provides evidence that internal capital markets efficiently allocate capital based predominantly on group member prior performance, consistent with the “winner picking” hypothesis. However, existing research has not examined the critical question of how these “winners” perform subsequent to receiving internal capital—that is, do winners keep winning? We extend the literature by providing empirical evidence on whether or not internal capital markets are ex post efficient. We find, in contrast to mean reversion, that winners continue their relatively high performance. Our study contributes to the literature examining the efficiency of internal capital markets and the conglomerate discount, as well as the literature specifically examining capital allocation in financial firms.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48202739","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Conformal Prediction Credibility Intervals","authors":"Liang Hong","doi":"10.1080/10920277.2022.2123364","DOIUrl":"https://doi.org/10.1080/10920277.2022.2123364","url":null,"abstract":"In the predictive modeling context, the credibility estimator is a point predictor; it is easy to calculate and avoids the model misspecification risk asymptotically, but it provides no quantification of inferential uncertainty. A Bayesian prediction interval quantifies uncertainty of prediction, but it often requires expensive computation and is subject to model misspecification risk even asymptotically. Is there a way to get the best of both worlds? Based on a powerful machine learning strategy called conformal prediction, this article proposes a method that converts the credibility estimator into a conformal prediction credibility interval. This conformal prediction credibility interval contains the credibility estimator, has computational simplicity, and guarantees finite-sample validity at a pre-assigned coverage level.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43851493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"An Empirical Assessment of Regulatory Lag in Insurance Rate Filings","authors":"P. Born, J. Bradley Karl, R. Klein","doi":"10.1080/10920277.2022.2123360","DOIUrl":"https://doi.org/10.1080/10920277.2022.2123360","url":null,"abstract":"In this article, we evaluate factors that help to explain an important source of variation in insurers' rate filing experiences across states and over time for personal automobile insurance. Using a new source of data from personal auto insurance rate filings for all U.S. insurers, we examine factors associated with regulatory lag. The timeliness of the disposition of insurers' rate filings is important, as significant delays can undermine the usefulness of the actuarial analysis required for justifying rate changes and may result in rate inadequacy pending the approval of rate increases. While there is a considerable literature on the effect of rate regulation regimes on insurance market outcomes, this is the first article that evaluates factors associated with regulatory lag. We use a principal components approach to explore the relative influence of various factors on the timeliness of filing approval. These factors are associated with (1) industry interest, resources, and influence, (2) demand conditions, complexity, and saliency, (3) the goals of political elites, and (4) the goals and resources of regulators as important drivers of insurers' rate filing experience. We find that state rate filing statutes account for some of the variation in regulatory lag and identify other significant factors that explain the variation in the timeliness of rate approvals across states and over time.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-10-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46478713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bayesian Multivariate Mixed Poisson Models with Copula-Based Mixture","authors":"Pengcheng Zhang, E. Calderín-Ojeda, Shuanming Li, Xueyuan Wu","doi":"10.1080/10920277.2022.2112233","DOIUrl":"https://doi.org/10.1080/10920277.2022.2112233","url":null,"abstract":"It is common practice to use multivariate count modeling in actuarial literature when dealing with claim counts from insurance policies with multiple covers. One possible way to construct such a model is to implement copula directly on discrete margins. However, likelihood inference under this construction involves the computation of multidimensional rectangle probabilities, which could be computationally expensive, especially in the elliptical copula case. Another potential approach is based on the multivariate mixed Poisson model. The crucial work under this method is to find an appropriate multivariate continuous distribution for mixing parameters. By virtue of the copula, this issue could be easily addressed. Under such a framework, the Markov chain Monte Carlo (MCMC) method is a feasible strategy for inference. The usefulness of our model is then illustrated through a real-life example. The empirical analysis demonstrates the superiority of adopting a copula-based mixture over other types of mixtures. Finally, we demonstrate how those fitted models can be applied to the insurance ratemaking problem in a Bayesian context.","PeriodicalId":46812,"journal":{"name":"North American Actuarial Journal","volume":null,"pages":null},"PeriodicalIF":1.4,"publicationDate":"2022-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42189061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}