Calibrating Cybersecurity Experiments: Evaluating Coverage Analysis for Fuzzing Benchmarks

J. Alves-Foss, Aditi Pokharel, Ronisha Shigdel, Jia Song

2023 IEEE/ACIS 21st International Conference on Software Engineering Research, Management and Applications (SERA), published May 23, 2023. DOI: 10.1109/SERA57763.2023.10197736
Computer science experimentation, whether for safety, reliability, or cybersecurity, is an important part of scientific advancement. Evaluating the relative merits of different experiments typically requires well-calibrated benchmarks against which experimental results can be measured. This paper reviews current trends in the use of benchmarks in fuzzing research for cybersecurity, focusing on metrics related to coverage analysis. The strengths and weaknesses of current techniques are evaluated, and improvements to current approaches are proposed. The end goal is to convince researchers that experimental benchmarks must be well documented, archived, and calibrated, so that the community knows how well tools and techniques perform relative to the maximum achievable in the benchmark.
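The calibration idea in the abstract can be made concrete with a small sketch (not taken from the paper; the function name and the edge counts are hypothetical): instead of reporting a fuzzer's raw coverage count, normalize it against the benchmark's documented maximum achievable coverage, so results are comparable across tools and benchmarks.

```python
# Illustrative sketch, assuming edge coverage as the metric and a
# benchmark that documents its maximum number of reachable edges.
# All names and numbers here are hypothetical.

def calibrated_coverage(edges_hit: int, max_reachable_edges: int) -> float:
    """Fraction of the benchmark's reachable edges a fuzzer covered."""
    if max_reachable_edges <= 0:
        raise ValueError("benchmark must document a positive reachable maximum")
    return edges_hit / max_reachable_edges

# A raw count of 1200 covered edges means little on its own; against a
# documented maximum of 1600 reachable edges it becomes a calibrated
# score of 75%, directly comparable across tools on this benchmark.
score = calibrated_coverage(1200, 1600)
print(f"{score:.0%}")  # prints "75%"
```

The design point is the denominator: without an archived, calibrated maximum for the benchmark, two tools' raw coverage numbers cannot be meaningfully compared.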