{"title":"编译器优化及其对实时系统开发的影响","authors":"K. Tucker, E. Solomon, K. Littlejohn","doi":"10.1109/DASC.1998.741479","DOIUrl":null,"url":null,"abstract":"Real-time systems are increasing in size and complexity. Software developers seek to maximize the performance, as well as minimize the memory requirements in order to meet the constraints of these systems. Traditionally, software developers have overcome these constraints by using an optimizing compiler. However, optimization carries a hidden price. Many Commercial-Off-The-Shelf (COTS) debuggers are unable to work effectively with optimized applications, because the one-to-one mapping of source to object code is often disturbed by optimization. While optimizing, a compiler is free to reorder or interleave the code for statements, so long as the semantics of the program are preserved. This causes two major problems for a debugger. First, selecting a representative instruction to be used when setting a breakpoint on a source-level statement becomes difficult. Second, at a given breakpoint, the value of a variable may differ from what it would be in the non-optimized version. This condition is known as non-currency. Each of these problems becomes a serious issue if a debugger is used to debug the deliverable code or to perform verification testing. This paper presents an in-depth study of the effects of loop-invariant optimization on debugging user applications. The paper will show how to significantly improve the performance of the debugger, with respect to breakpoints and non-current variables, when operating on code affected by this optimization. A summary of the research done, including specific implementation issues and lessons learned while enhancing the DDC-I SPARCStation Solaris native Ada compiler system, is presented.","PeriodicalId":335827,"journal":{"name":"17th DASC. AIAA/IEEE/SAE. Digital Avionics Systems Conference. Proceedings (Cat. No.98CH36267)","volume":"37 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"1998-10-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":"{\"title\":\"Compiler optimization and its impact on development of real-time systems\",\"authors\":\"K. Tucker, E. Solomon, K. Littlejohn\",\"doi\":\"10.1109/DASC.1998.741479\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Real-time systems are increasing in size and complexity. Software developers seek to maximize the performance, as well as minimize the memory requirements in order to meet the constraints of these systems. Traditionally, software developers have overcome these constraints by using an optimizing compiler. However, optimization carries a hidden price. Many Commercial-Off-The-Shelf (COTS) debuggers are unable to work effectively with optimized applications, because the one-to-one mapping of source to object code is often disturbed by optimization. While optimizing, a compiler is free to reorder or interleave the code for statements, so long as the semantics of the program are preserved. This causes two major problems for a debugger. First, selecting a representative instruction to be used when setting a breakpoint on a source-level statement becomes difficult. Second, at a given breakpoint, the value of a variable may differ from what it would be in the non-optimized version. This condition is known as non-currency. Each of these problems becomes a serious issue if a debugger is used to debug the deliverable code or to perform verification testing. 
This paper presents an in-depth study of the effects of loop-invariant optimization on debugging user applications. The paper will show how to significantly improve the performance of the debugger, with respect to breakpoints and non-current variables, when operating on code affected by this optimization. A summary of the research done, including specific implementation issues and lessons learned while enhancing the DDC-I SPARCStation Solaris native Ada compiler system, is presented.\",\"PeriodicalId\":335827,\"journal\":{\"name\":\"17th DASC. AIAA/IEEE/SAE. Digital Avionics Systems Conference. Proceedings (Cat. No.98CH36267)\",\"volume\":\"37 1\",\"pages\":\"0\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"1998-10-31\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"1\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"17th DASC. AIAA/IEEE/SAE. Digital Avionics Systems Conference. Proceedings (Cat. No.98CH36267)\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1109/DASC.1998.741479\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"17th DASC. AIAA/IEEE/SAE. Digital Avionics Systems Conference. Proceedings (Cat. No.98CH36267)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/DASC.1998.741479","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Compiler optimization and its impact on development of real-time systems

K. Tucker, E. Solomon, K. Littlejohn
17th DASC. AIAA/IEEE/SAE. Digital Avionics Systems Conference. Proceedings (Cat. No.98CH36267), 31 October 1998. DOI: 10.1109/DASC.1998.741479
Real-time systems are increasing in size and complexity. To meet the constraints of these systems, software developers seek to maximize performance and minimize memory requirements, and traditionally they have done so by using an optimizing compiler. However, optimization carries a hidden price. Many Commercial-Off-The-Shelf (COTS) debuggers cannot work effectively with optimized applications, because optimization often disturbs the one-to-one mapping between source and object code. While optimizing, a compiler is free to reorder or interleave the code generated for statements, so long as the semantics of the program are preserved. This causes two major problems for a debugger. First, it becomes difficult to select a representative instruction to use when setting a breakpoint on a source-level statement. Second, at a given breakpoint, the value of a variable may differ from what it would be in the non-optimized version; this condition is known as non-currency. Both problems become serious if a debugger is used to debug the deliverable code or to perform verification testing. This paper presents an in-depth study of the effects of loop-invariant optimization on debugging user applications. The paper shows how to significantly improve the debugger's handling of breakpoints and non-current variables when operating on code affected by this optimization. A summary of the research done is presented, including specific implementation issues and lessons learned while enhancing the DDC-I SPARCStation Solaris native Ada compiler system.
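To make the failure mode concrete, the following hypothetical C fragment (not taken from the paper, which targets an Ada compiler; all names and values here are illustrative assumptions) sketches what loop-invariant code motion does to a loop body:

#include <stdio.h>

/* Illustrative example only: the subexpression scale * offset does not
 * depend on the loop variable i, so an optimizing compiler may compute
 * it once before the loop (loop-invariant code motion). */
int sum_scaled(const int *a, int n, int scale, int offset)
{
    int total = 0;
    for (int i = 0; i < n; i++) {
        /* The multiply here is loop-invariant and a candidate for hoisting. */
        total += a[i] + scale * offset;
    }
    return total;
}

/* Roughly what the optimizer produces, written out by hand: */
int sum_scaled_hoisted(const int *a, int n, int scale, int offset)
{
    int total = 0;
    int t = scale * offset;  /* hoisted invariant computation */
    for (int i = 0; i < n; i++) {
        total += a[i] + t;
    }
    return total;
}

int main(void)
{
    int a[] = {1, 2, 3, 4};
    /* Both versions print 50: the transformation preserves semantics. */
    printf("%d\n", sum_scaled(a, 4, 2, 5));
    printf("%d\n", sum_scaled_hoisted(a, 4, 2, 5));
    return 0;
}

In the hoisted version, a breakpoint nominally set on the statement inside the loop no longer corresponds to a single representative instruction, since part of the statement's computation (the multiply) now executes before the loop is entered. Likewise, if the source program had stored the product in a named variable inside the loop, that variable would already hold its value at points where the unoptimized program had not yet assigned it, i.e. it would be non-current.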