Title: ARCH-COMP 2020 Category Report: Falsification
Authors: G. Ernst, Paolo Arcaini, Ismail Bennani, Alexandre Donzé, Georgios Fainekos, G. Frehse, L. Mathesen, C. Menghi, Giulia Pedrielli, M. Pouzet, Shakiba Yaghoubi, Yoriyuki Yamagata, Zhenya Zhang
DOI: 10.29007/trr1 (https://doi.org/10.29007/trr1)
Published: 2020-09-25, The Archivist, vol. 1, pp. 140-152
Citations: 29
Abstract
This report presents the results of the 2020 friendly competition, held at the ARCH workshop, on the falsification of temporal logic specifications over cyber-physical systems. We briefly describe the competition settings, which are inherited from the previous year, give background on the participating teams and tools, and discuss the selected benchmarks. The benchmarks are available on the ARCH website as well as in the competition's GitLab repository. Compared to 2019, two new tools with novel approaches participated, and the results show a clear improvement over previous performances on some benchmarks.