Comparing Penalty Functions in Balancing and Dis-aggregating Social Accounting Matrices

IF 2.2 Q2 ECONOMICS
W. Britz
{"title":"社会核算矩阵平衡与分解中惩罚函数的比较","authors":"W. Britz","doi":"10.21642/JGEA.060102AF","DOIUrl":null,"url":null,"abstract":"Constructing a balanced and sufficiently detailed Social Accounting Matrix (SAM) is a necessary step for any work with Computable General Equilibrium (CGE) models. Even when starting with a given SAM, researchers might wish to develop their own, more detailed variants for a specific study by dis-aggregating sectors and products, a process termed splitting the SAM. We review three approaches for balancing and splitting a SAM: Cross-Entropy (CE), a Highest Posterior Density (HPD) estimator resulting in a quadratic loss penalty function, and a linear loss penalty function. The exercise considers upper and lower bounds on the (new) SAM entries, different weights for penalizing deviations from a priori information, and unknown row or column totals, to give the user flexibility in controlling outcomes. The approaches are assessed first by a systematic Monte-Carlo experiment. It re-balances smaller SAMs, after errors with known distributions are added. Here we find quite limited numerical differences between the CE and quadratic loss approaches. The CE approach was however considerably slower than the other candidates. Second, we tested the three approaches for dis-aggregating the Global Trade Analysis Project (GTAP) data base to provide, as an example, further agri-food detail. In such empirical applications, the distribution of the errors of the new SAM entries is typically not known. As in the SAM balancing exercise, we use CONOPT4 as a multi-purpose (non)linear solver which can be also be employed to solve the CGE model itself. For comparison, we add the specialized Linear and Quadratic Programming (QP) solvers CPLEDX and GUROBI. As in the Monte-Carlo experiment, the differences in results between the three approaches were moderate. The specialized solvers require very little time to solve the linear and quadratic loss problems. However, they did not achieve the same, very high accuracy as CONOPT4 for the quadratic loss problem. The CE problem could take longer by a factor of 100 or more, compared to a linear or quadratic loss approach solved with the specialized solvers. We conclude that using linear or quadratic loss approaches, especially combined with a specialized solver, are the most suitable candidates for larger SAM splitting / balancing problems. Additionally, we present a fast and accurate data processing chain to yield a benchmark data set for a CGE model from the GTAP Data Base which involves filtering out small cost, expenditure and revenue shares, and allows users to introduce further product and sectoral detail based on user provided information.","PeriodicalId":44607,"journal":{"name":"Journal of Global Economic Analysis","volume":" ","pages":""},"PeriodicalIF":2.2000,"publicationDate":"2021-06-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":"{\"title\":\"Comparing Penalty Functions in Balancing and Dis-aggregating Social Accounting Matrices\",\"authors\":\"W. Britz\",\"doi\":\"10.21642/JGEA.060102AF\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Constructing a balanced and sufficiently detailed Social Accounting Matrix (SAM) is a necessary step for any work with Computable General Equilibrium (CGE) models. 
Even when starting with a given SAM, researchers might wish to develop their own, more detailed variants for a specific study by dis-aggregating sectors and products, a process termed splitting the SAM. We review three approaches for balancing and splitting a SAM: Cross-Entropy (CE), a Highest Posterior Density (HPD) estimator resulting in a quadratic loss penalty function, and a linear loss penalty function. The exercise considers upper and lower bounds on the (new) SAM entries, different weights for penalizing deviations from a priori information, and unknown row or column totals, to give the user flexibility in controlling outcomes. The approaches are assessed first by a systematic Monte-Carlo experiment. It re-balances smaller SAMs, after errors with known distributions are added. Here we find quite limited numerical differences between the CE and quadratic loss approaches. The CE approach was however considerably slower than the other candidates. Second, we tested the three approaches for dis-aggregating the Global Trade Analysis Project (GTAP) data base to provide, as an example, further agri-food detail. In such empirical applications, the distribution of the errors of the new SAM entries is typically not known. As in the SAM balancing exercise, we use CONOPT4 as a multi-purpose (non)linear solver which can be also be employed to solve the CGE model itself. For comparison, we add the specialized Linear and Quadratic Programming (QP) solvers CPLEDX and GUROBI. As in the Monte-Carlo experiment, the differences in results between the three approaches were moderate. The specialized solvers require very little time to solve the linear and quadratic loss problems. However, they did not achieve the same, very high accuracy as CONOPT4 for the quadratic loss problem. The CE problem could take longer by a factor of 100 or more, compared to a linear or quadratic loss approach solved with the specialized solvers. We conclude that using linear or quadratic loss approaches, especially combined with a specialized solver, are the most suitable candidates for larger SAM splitting / balancing problems. 
Additionally, we present a fast and accurate data processing chain to yield a benchmark data set for a CGE model from the GTAP Data Base which involves filtering out small cost, expenditure and revenue shares, and allows users to introduce further product and sectoral detail based on user provided information.\",\"PeriodicalId\":44607,\"journal\":{\"name\":\"Journal of Global Economic Analysis\",\"volume\":\" \",\"pages\":\"\"},\"PeriodicalIF\":2.2000,\"publicationDate\":\"2021-06-15\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"2\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Global Economic Analysis\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.21642/JGEA.060102AF\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"ECONOMICS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Global Economic Analysis","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.21642/JGEA.060102AF","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ECONOMICS","Score":null,"Total":0}
Citations: 2

Abstract

Constructing a balanced and sufficiently detailed Social Accounting Matrix (SAM) is a necessary step for any work with Computable General Equilibrium (CGE) models. Even when starting with a given SAM, researchers might wish to develop their own, more detailed variants for a specific study by dis-aggregating sectors and products, a process termed splitting the SAM. We review three approaches for balancing and splitting a SAM: Cross-Entropy (CE), a Highest Posterior Density (HPD) estimator resulting in a quadratic loss penalty function, and a linear loss penalty function. The exercise considers upper and lower bounds on the (new) SAM entries, different weights for penalizing deviations from a priori information, and unknown row or column totals, to give the user flexibility in controlling outcomes. The approaches are assessed first by a systematic Monte-Carlo experiment, which re-balances smaller SAMs after errors with known distributions are added. Here we find quite limited numerical differences between the CE and quadratic loss approaches; the CE approach was, however, considerably slower than the other candidates. Second, we tested the three approaches for dis-aggregating the Global Trade Analysis Project (GTAP) Data Base to provide, as an example, further agri-food detail. In such empirical applications, the distribution of the errors of the new SAM entries is typically not known. As in the SAM balancing exercise, we use CONOPT4 as a multi-purpose (non)linear solver which can also be employed to solve the CGE model itself. For comparison, we add the specialized Linear and Quadratic Programming (QP) solvers CPLEX and GUROBI. As in the Monte-Carlo experiment, the differences in results between the three approaches were moderate. The specialized solvers require very little time to solve the linear and quadratic loss problems; however, they did not achieve the same, very high accuracy as CONOPT4 for the quadratic loss problem. The CE problem could take longer by a factor of 100 or more compared to a linear or quadratic loss approach solved with the specialized solvers. We conclude that linear or quadratic loss approaches, especially when combined with a specialized solver, are the most suitable candidates for larger SAM splitting / balancing problems. Additionally, we present a fast and accurate data processing chain to yield a benchmark data set for a CGE model from the GTAP Data Base, which involves filtering out small cost, expenditure and revenue shares and allows users to introduce further product and sectoral detail based on user-provided information.
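For orientation, the three penalty functions compared in the abstract typically take the following forms in the SAM balancing literature. This is a minimal illustrative sketch, not the paper's exact specification: the weights w_{ij}, the prior entries \bar{a}_{ij}, and the way the balance condition and unknown totals are written are generic assumptions, and the paper may, for example, apply the CE criterion to column coefficients rather than to the SAM entries themselves.

```latex
% Illustrative formulations (not the paper's exact ones) for balancing a SAM
% A = (a_{ij}) against prior entries \bar{a}_{ij}, with user-chosen weights w_{ij}.

% Cross-Entropy (CE) penalty:
\min_{a_{ij} \ge 0} \; \sum_{i,j} a_{ij}\,\ln\!\frac{a_{ij}}{\bar{a}_{ij}}

% Quadratic loss penalty (an HPD estimator under normal priors):
\min_{a_{ij}} \; \sum_{i,j} w_{ij}\,\bigl(a_{ij}-\bar{a}_{ij}\bigr)^{2}

% Linear (absolute) loss penalty:
\min_{a_{ij}} \; \sum_{i,j} w_{ij}\,\bigl|a_{ij}-\bar{a}_{ij}\bigr|

% Each objective is minimized subject to the SAM balance condition
% (row total equals column total for every account i)
\sum_{j} a_{ij} \;=\; \sum_{j} a_{ji} \qquad \forall\, i,
% plus any user-supplied lower and upper bounds on individual entries.
```

In the linear case the absolute values are usually linearized with non-negative positive and negative deviation variables, which is what allows LP/QP solvers such as CPLEX and GUROBI to handle the problem directly, whereas the CE objective requires a nonlinear solver such as CONOPT4.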