{"title":"用Genevieve生成半正式测试","authors":"J. Dushina, M. Benjamin, D. Geist","doi":"10.1145/378239.379035","DOIUrl":null,"url":null,"abstract":"This paper describes the first application of the Genevieve test generation methodology. The Genevieve approach uses semi-formal techniques derived from \"model-checking\" to generate test suites for specific behaviours of the design under test. An \"interesting\" behaviour is claimed to be unreachable. If a path from an initial state to the state of interest does exist, a counter-example is generated. The sequence of states specifies a test for the desired behaviour. To highlight real problems that could appear during test generation, we chose the Store Data Unit (SDU) of the ST100, a new high performance digital signal processor (DSP) developed by STMicroelectronics. This unit is specifically selected because of the following key issues: 1. big data structures that can not be directly modelled without state explosion, 2. complex control logic that would require an excessive number of tests to exercise exhaustively, 3. a design where it is difficult to determine how to drive the complete system to ensure a given behaviour in the unit under test. The Genevieve methodology allowed us to define a coverage model specifically devoted to covering corner cases of the design. Hence the generated test suite achieved very efficient coverage of corner cases, and checked not only functional correctness but also whether the implementation matched design intent. As a result the Genevieve tests discovered some subtle performance bugs which would otherwise be very difficult to find.","PeriodicalId":154316,"journal":{"name":"Proceedings of the 38th Design Automation Conference (IEEE Cat. No.01CH37232)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2001-06-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"16","resultStr":"{\"title\":\"Semi-formal test generation with Genevieve\",\"authors\":\"J. Dushina, M. Benjamin, D. Geist\",\"doi\":\"10.1145/378239.379035\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"This paper describes the first application of the Genevieve test generation methodology. The Genevieve approach uses semi-formal techniques derived from \\\"model-checking\\\" to generate test suites for specific behaviours of the design under test. An \\\"interesting\\\" behaviour is claimed to be unreachable. If a path from an initial state to the state of interest does exist, a counter-example is generated. The sequence of states specifies a test for the desired behaviour. To highlight real problems that could appear during test generation, we chose the Store Data Unit (SDU) of the ST100, a new high performance digital signal processor (DSP) developed by STMicroelectronics. This unit is specifically selected because of the following key issues: 1. big data structures that can not be directly modelled without state explosion, 2. complex control logic that would require an excessive number of tests to exercise exhaustively, 3. a design where it is difficult to determine how to drive the complete system to ensure a given behaviour in the unit under test. The Genevieve methodology allowed us to define a coverage model specifically devoted to covering corner cases of the design. Hence the generated test suite achieved very efficient coverage of corner cases, and checked not only functional correctness but also whether the implementation matched design intent. 
As a result the Genevieve tests discovered some subtle performance bugs which would otherwise be very difficult to find.\",\"PeriodicalId\":154316,\"journal\":{\"name\":\"Proceedings of the 38th Design Automation Conference (IEEE Cat. No.01CH37232)\",\"volume\":\"15 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2001-06-22\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"16\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the 38th Design Automation Conference (IEEE Cat. No.01CH37232)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1145/378239.379035\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 38th Design Automation Conference (IEEE Cat. No.01CH37232)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/378239.379035","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
This paper describes the first application of the Genevieve test generation methodology. The Genevieve approach uses semi-formal techniques derived from model checking to generate test suites for specific behaviours of the design under test. An "interesting" behaviour is asserted to be unreachable; if a path from an initial state to the state of interest does exist, the model checker produces a counter-example, and that sequence of states specifies a test for the desired behaviour. To highlight real problems that can appear during test generation, we chose the Store Data Unit (SDU) of the ST100, a new high-performance digital signal processor (DSP) developed by STMicroelectronics. This unit was selected because of the following key issues:

1. large data structures that cannot be modelled directly without state explosion,
2. complex control logic that would require an excessive number of tests to exercise exhaustively,
3. a design in which it is difficult to determine how to drive the complete system to ensure a given behaviour in the unit under test.

The Genevieve methodology allowed us to define a coverage model specifically devoted to the corner cases of the design. The generated test suite therefore achieved very efficient coverage of corner cases, checking not only functional correctness but also whether the implementation matched design intent. As a result, the Genevieve tests discovered subtle performance bugs that would otherwise have been very difficult to find.
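The abstract stays at the level of the idea, so the following is only a rough illustrative sketch of the counter-example trick it describes. Everything in the sketch is an assumption made for illustration: the toy two-entry store-buffer model, the chosen corner case, and the breadth-first reachability search (standing in for a real model checker) are invented for the example and are not taken from the paper or the Genevieve tool.

    # Minimal sketch: claim a corner case unreachable, search for a
    # counter-example path, and use that path as the generated test.
    # The model and names here are hypothetical stand-ins.
    from collections import deque

    # Toy "design under test": a 2-entry store buffer whose state is
    # (occupancy, flush_pending). The corner case we want a test for is
    # "buffer full while a flush is pending".
    INPUTS = ["store", "flush", "nop"]

    def step(state, inp):
        """Transition relation of the toy model."""
        occ, pending = state
        if inp == "store" and occ < 2:
            return (occ + 1, pending)
        if inp == "flush":
            return (occ, True)
        if inp == "nop" and pending and occ > 0:
            return (occ - 1, False)   # a pending flush drains one entry
        return state

    def is_corner_case(state):
        occ, pending = state
        return occ == 2 and pending   # the behaviour "claimed" unreachable

    def find_counterexample(initial):
        """BFS reachability, standing in for the model checker. If the
        corner case is reachable, the input path leading to it *is* the
        test; if the search exhausts, the unreachability claim holds."""
        frontier = deque([(initial, [])])
        seen = {initial}
        while frontier:
            state, trace = frontier.popleft()
            if is_corner_case(state):
                return trace          # counter-example = generated test
            for inp in INPUTS:
                nxt = step(state, inp)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, trace + [inp]))
        return None

    test = find_counterexample((0, False))
    print("generated test stimulus:", test)

Running the sketch prints a stimulus sequence such as ['store', 'store', 'flush']: the "unreachable" claim is refuted, and the refuting trace is exactly the test that drives the design into the targeted corner case.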