{"title":"Model Input Verification of Large Scale Simulations","authors":"Rumyana Neykova, Derek Groen","doi":"arxiv-2409.05768","DOIUrl":null,"url":null,"abstract":"Reliable simulations are critical for analyzing and understanding complex\nsystems, but their accuracy depends on correct input data. Incorrect inputs\nsuch as invalid or out-of-range values, missing data, and format\ninconsistencies can cause simulation crashes or unnoticed result distortions,\nultimately undermining the validity of the conclusions. This paper presents a\nmethodology for verifying the validity of input data in simulations, a process\nwe term model input verification (MIV). We implement this approach in FabGuard,\na toolset that uses established data schema and validation tools for the\nspecific needs of simulation modeling. We introduce a formalism for\ncategorizing MIV patterns and offer a streamlined verification pipeline that\nintegrates into existing simulation workflows. FabGuard's applicability is\ndemonstrated across three diverse domains: conflict-driven migration, disaster\nevacuation, and disease spread models. We also explore the use of Large\nLanguage Models (LLMs) for automating constraint generation and inference. In a\ncase study with a migration simulation, LLMs not only correctly inferred 22 out\nof 23 developer-defined constraints, but also identified errors in existing\nconstraints and proposed new, valid constraints. Our evaluation demonstrates\nthat MIV is feasible on large datasets, with FabGuard efficiently processing\n12,000 input files in 140 seconds and maintaining consistent performance across\nvarying file sizes.","PeriodicalId":501422,"journal":{"name":"arXiv - CS - Distributed, Parallel, and Cluster Computing","volume":"15 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-09-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Distributed, Parallel, and Cluster Computing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.05768","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Reliable simulations are critical for analyzing and understanding complex systems, but their accuracy depends on correct input data. Incorrect inputs, such as invalid or out-of-range values, missing data, and format inconsistencies, can cause simulation crashes or unnoticed result distortions, ultimately undermining the validity of the conclusions. This paper presents a methodology for verifying the validity of input data in simulations, a process we term model input verification (MIV). We implement this approach in FabGuard, a toolset that adapts established data-schema and validation tools to the specific needs of simulation modeling. We introduce a formalism for categorizing MIV patterns and offer a streamlined verification pipeline that integrates into existing simulation workflows. FabGuard's applicability is demonstrated across three diverse domains: conflict-driven migration, disaster evacuation, and disease-spread models. We also explore the use of Large Language Models (LLMs) for automating constraint generation and inference. In a case study with a migration simulation, LLMs not only correctly inferred 22 out of 23 developer-defined constraints, but also identified errors in existing constraints and proposed new, valid constraints. Our evaluation demonstrates that MIV is feasible on large datasets: FabGuard processes 12,000 input files in 140 seconds and maintains consistent performance across varying file sizes.
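
To make the MIV idea concrete, the following is a minimal sketch of schema-based input verification in the spirit described above, written against the Pandera validation library. The file name, column names, and specific checks are hypothetical illustrations, not the schemas used in the paper; they mirror the error classes the abstract lists (out-of-range values, missing data, wrong types).

```python
# Minimal MIV sketch: declare constraints once, verify every input file
# before the simulation starts. Columns and bounds are hypothetical.
import pandas as pd
import pandera as pa

locations_schema = pa.DataFrameSchema({
    "name": pa.Column(str, nullable=False),                       # missing data
    "population": pa.Column(int, pa.Check.ge(0)),                 # out-of-range
    "latitude": pa.Column(float, pa.Check.in_range(-90.0, 90.0)),
    "longitude": pa.Column(float, pa.Check.in_range(-180.0, 180.0)),
})

def verify_input(path: str) -> pd.DataFrame:
    """Validate one simulation input file; fail fast with a readable report."""
    df = pd.read_csv(path)
    try:
        # lazy=True collects all violations instead of stopping at the first.
        return locations_schema.validate(df, lazy=True)
    except pa.errors.SchemaErrors as err:
        raise SystemExit(
            f"Input verification failed for {path}:\n{err.failure_cases}"
        )

if __name__ == "__main__":
    verify_input("locations.csv")  # hypothetical input file
```

Declaring constraints as a schema object, rather than scattering ad hoc checks through the model code, is what makes the batch result reported above plausible: the same schema can be applied mechanically to thousands of input files ahead of a run.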
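One possible shape of the LLM-assisted constraint inference mentioned in the abstract is sketched below: show the model a small sample of an input file and ask it to propose candidate schema constraints for human review. The prompt wording, the model name, and the use of the OpenAI client are assumptions for illustration, not the paper's actual pipeline.

```python
# Hypothetical sketch of LLM-assisted constraint inference. The returned
# code is a *candidate* schema and must be reviewed before use.
import pandas as pd
from openai import OpenAI

def propose_constraints(path: str, n_rows: int = 20) -> str:
    sample = pd.read_csv(path).head(n_rows)
    prompt = (
        "Given this sample of a simulation input file, propose a Pandera "
        "DataFrameSchema with type, range, and nullability constraints "
        "for every column. Return only Python code.\n\n"
        f"{sample.to_csv(index=False)}"
    )
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    resp = client.chat.completions.create(
        model="gpt-4o",  # assumed model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content
```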