The Theory-Practice Gap in the Evaluation of Agent-Based Social Simulations
David Anzola
Science in Context (Q2, Arts and Humanities), published 2021-09-01
DOI: 10.1017/S0269889722000242
Citations: 3
Abstract
Agent-based social simulations have historically been evaluated using two criteria: verification and validation. This article questions the adequacy of this dual evaluation scheme. It claims that the scheme does not conform to everyday practices of evaluation, and has, over time, fostered a theory-practice gap in the assessment of social simulations. This gap originates because the dual evaluation scheme, inherited from computer science and software engineering, on one hand, overemphasizes the technical and formal aspects of the implementation process and, on the other hand, misrepresents the connection between the conceptual and the computational model. The mismatch between evaluation theory and practice, it is suggested, might be overcome if practitioners of agent-based social simulation adopt a single criterion evaluation scheme in which: i) the technical/formal issues of the implementation process are tackled as a matter of debugging or instrument calibration, and ii) the epistemological issues surrounding the connection between conceptual and computational models are addressed as a matter of validation.
Journal Information:
Science in Context is an international journal edited at The Cohn Institute for the History and Philosophy of Science and Ideas, Tel Aviv University, with the support of the Van Leer Jerusalem Institute. It is devoted to the study of the sciences from the perspectives of comparative epistemology and the historical sociology of scientific knowledge. The journal is committed to an interdisciplinary approach to the study of science and its cultural development: it does not segregate considerations drawn from history, philosophy, and sociology. Controversies within scientific knowledge and debates about methodology are presented in their contexts.