{"title":"数虫子比你想象的要难","authors":"P. Black","doi":"10.1109/SCAM.2011.24","DOIUrl":null,"url":null,"abstract":"Software Assurance Metrics and Tool Evaluation (SAMATE) is a broad, inclusive project at the U.S. National Institute of Standards and Technology (NIST) with the goal of improving software assurance by developing materials, specifications, and methods to test tools and techniques and measure their effectiveness. We review some SAMATE sub-projects: web application security scanners, malware research protocol, electronic voting systems, the SAMATE Reference Dataset, a public repository of thousands of example programs with known weaknesses, and the Static Analysis Tool Exposition (SATE). Along the way we list over two dozen possible research questions, which are also collaboration opportunities. Software metrics are incomplete without metrics of what is variously called bugs, flaws, or faults. We detail numerous critical research problems related to such metrics. For instance, is a warning from a source code scanner a real bug, a false positive, or something else? If a numeric overflow leads to buffer overflow, which leads to command injection, what is the error? How many bugs are there if two sources call two sinks: 1, 2, or 4? Where is a missing feature? We conclude with a list of concepts which may be a useful basis of bug metrics.","PeriodicalId":286433,"journal":{"name":"2011 IEEE 11th International Working Conference on Source Code Analysis and Manipulation","volume":"196 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2011-09-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"15","resultStr":"{\"title\":\"Counting Bugs is Harder Than You Think\",\"authors\":\"P. Black\",\"doi\":\"10.1109/SCAM.2011.24\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Software Assurance Metrics and Tool Evaluation (SAMATE) is a broad, inclusive project at the U.S. National Institute of Standards and Technology (NIST) with the goal of improving software assurance by developing materials, specifications, and methods to test tools and techniques and measure their effectiveness. We review some SAMATE sub-projects: web application security scanners, malware research protocol, electronic voting systems, the SAMATE Reference Dataset, a public repository of thousands of example programs with known weaknesses, and the Static Analysis Tool Exposition (SATE). Along the way we list over two dozen possible research questions, which are also collaboration opportunities. Software metrics are incomplete without metrics of what is variously called bugs, flaws, or faults. We detail numerous critical research problems related to such metrics. For instance, is a warning from a source code scanner a real bug, a false positive, or something else? If a numeric overflow leads to buffer overflow, which leads to command injection, what is the error? How many bugs are there if two sources call two sinks: 1, 2, or 4? Where is a missing feature? 
We conclude with a list of concepts which may be a useful basis of bug metrics.\",\"PeriodicalId\":286433,\"journal\":{\"name\":\"2011 IEEE 11th International Working Conference on Source Code Analysis and Manipulation\",\"volume\":\"196 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2011-09-25\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"15\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"2011 IEEE 11th International Working Conference on Source Code Analysis and Manipulation\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/SCAM.2011.24\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"2011 IEEE 11th International Working Conference on Source Code Analysis and Manipulation","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/SCAM.2011.24","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract: Software Assurance Metrics and Tool Evaluation (SAMATE) is a broad, inclusive project at the U.S. National Institute of Standards and Technology (NIST) with the goal of improving software assurance by developing materials, specifications, and methods to test tools and techniques and measure their effectiveness. We review some SAMATE sub-projects: web application security scanners; a malware research protocol; electronic voting systems; the SAMATE Reference Dataset, a public repository of thousands of example programs with known weaknesses; and the Static Analysis Tool Exposition (SATE). Along the way we list over two dozen possible research questions, which are also collaboration opportunities. Software metrics are incomplete without metrics for what is variously called bugs, flaws, or faults. We detail numerous critical research problems related to such metrics. For instance, is a warning from a source code scanner a real bug, a false positive, or something else? If a numeric overflow leads to a buffer overflow, which in turn leads to command injection, what is the error? How many bugs are there if two sources call two sinks: 1, 2, or 4? Where is a missing feature? We conclude with a list of concepts that may be a useful basis for bug metrics.
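
To make the counting question concrete, here is a minimal, hypothetical C sketch (ours, not taken from the paper): untrusted data enters at two sources and reaches two unchecked sinks, so a tool could defensibly report one weakness (the single shared unbounded copy), two (one per source, or one per sink), or four (one per source-to-sink path).

/*
 * Hypothetical illustration of the "two sources call two sinks"
 * counting puzzle.  Is this 1 bug, 2, or 4?
 */
#include <stdio.h>
#include <string.h>

static char buf[16];

/* Sink A: unbounded copy into a fixed-size buffer (CWE-120). */
static void sink_a(const char *s) { strcpy(buf, s); }

/* Sink B: the same weakness at a second location. */
static void sink_b(const char *s) { strcpy(buf, s); }

int main(int argc, char **argv) {
    char line[256];

    /* Source 1: a command-line argument. */
    if (argc > 1) {
        sink_a(argv[1]);
        sink_b(argv[1]);
    }
    /* Source 2: a line read from standard input. */
    if (fgets(line, sizeof line, stdin) != NULL) {
        sink_a(line);
        sink_b(line);
    }
    return 0;
}

Any bug metric built on scanner output has to commit to one of these counting conventions, which is exactly the measurement problem the abstract raises.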