{"title":"A preliminary study on data allocation of on-chip dual memory banks","authors":"Jeonghun Cho, Jinhwan Kim, Y. Paek","doi":"10.1109/INTERA.2002.995844","DOIUrl":"https://doi.org/10.1109/INTERA.2002.995844","url":null,"abstract":"Efficient utilization of memory space is extremely important in embedded applications. Many DSP vendors provide a dual memory bank system that allows the applications to access two memory banks simultaneously. Unfortunately, we have found that existing vendor-provided compilers cannot generate highly efficient code for dual memory space because current compiler technology is unable to fully exploit this DSP hardware feature. Thus, software developers for an embedded processor have hard time developing software by hand in assembly to exploit the hardware feature efficiently. In this paper, we present a preliminary study of a memory allocation technique for dual memory space. Through there has been some work done for dual memory banks, efficient code was generated but it required so long compilation time. Although the compilation speed is relatively of less importance for embedded processors, it still should have a reasonable upper bound particularly for industry compilers due to ever increasing demands on faster time-to-market embedded software design and implementation. To achieve such reasonable compilation speed, we simplified the dual memory bank allocation problem by decoupling our code generation into five phases: register class allocation, code compaction, memory bank assignment, register assignment and memory offset assignment. The experimental results show that our generated codes perform as good as previous work, yet reducing the compilation time dramatically.","PeriodicalId":224706,"journal":{"name":"Proceedings Sixth Annual Workshop on Interaction between Compilers and Computer Architectures","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117353114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Quantitative evaluation of the register stack engine and optimizations for future Itanium processors","authors":"R. D. Weldon, Steven S. Chang, Hong Wang, Gerolf Hoflehner, P. Wang, Daniel M. Lavery, John Paul Shen","doi":"10.1109/INTERA.2002.995843","DOIUrl":"https://doi.org/10.1109/INTERA.2002.995843","url":null,"abstract":"This paper examines the efficiency of the register stack engine (RSE) in the canonical Itanium architecture, and introduces novel optimization techniques to enhance the RSE performance. To minimize spills and fills of the physical register file, optimizations are applied to reduce internal fragmentation in statically allocated register stack frames. Through the use of dynamic register usage (DRU) and dead register value information (DVI), the processor can dynamically guide allocation and deallocation of register frames. Consequently, a speculatively allocated register frame with a dynamically determined frame size can be much smaller than the statically determined frame size, thus achieving minimum spills and fills. Using the register stack engine (RSE) in the canonical Itanium architecture as the baseline reference, we thoroughly study and gauge the tradeoffs of the RSE and the proposed optimizations using a set of SPEC CPU2000 benchmarks built with different compiler optimizations. A combination of frame allocation policies using the most frequent frame size and deallocation policies using dead register information proves to be highly effective. On average, a 71% reduction in aggregate spills and fills can be achieved over the baseline reference.","PeriodicalId":224706,"journal":{"name":"Proceedings Sixth Annual Workshop on Interaction between Compilers and Computer Architectures","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124331640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Code cache management schemes for dynamic optimizers","authors":"K. Hazelwood, Michael D. Smith","doi":"10.1109/INTERA.2002.995847","DOIUrl":"https://doi.org/10.1109/INTERA.2002.995847","url":null,"abstract":"A dynamic optimizer is a software-based system that performs code modifications at runtime, and several such systems have been proposed over the past several years. These systems typically perform optimization on the level of an instruction trace, and most use caching mechanisms to store recently optimized portions of code. Since the dynamic optimizers produce variable-length code traces that are modified copies of portions of the original executable, a code cache management scheme must deal with the difficult problem of caching objects that vary in size and cannot be subdivided without adding extra jump instructions. Because of these constraints, many dynamic optimizers have chosen unsophisticated schemes, such as flushing the entire cache when it becomes full. Flushing minimizes the overhead of cache management but tends to discard many useful traces. This paper evaluates several alternative cache management schemes that identify and remove only enough traces to make room for a new trace. We find that by treating the code cache as a circular buffer, we can reduce the code cache miss rate by half of that achieved by flushing. Furthermore, this approach adds very little bookkeeping overhead and avoids the problems associated with code cache fragmentation. These characteristics are extremely important in a dynamic system since more complex strategies will do more harm than good if the overhead is too high.","PeriodicalId":224706,"journal":{"name":"Proceedings Sixth Annual Workshop on Interaction between Compilers and Computer Architectures","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128130418","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Code compression by register operand dependency","authors":"K. Lin, J. Shann, C. Chung","doi":"10.1109/INTERA.2002.995846","DOIUrl":"https://doi.org/10.1109/INTERA.2002.995846","url":null,"abstract":"This paper proposes a dictionary-based code compression technique that maps the source register operands to the nearest occurrence of a destination register in the predecessor instructions. The key idea is that most destination registers have great potential to be used as source registers in the following instructions. The dependent registers can be removed from the dictionary if this information can be specified otherwise. As a result, the compression ratio benefits from the decreased dictionary size. A set of programs has been compressed using this feature. The compression results show that the average compression ratio is reduced to 38.6% on average for MediaBench benchmarks compiled for MIPS R2000 processor.","PeriodicalId":224706,"journal":{"name":"Proceedings Sixth Annual Workshop on Interaction between Compilers and Computer Architectures","volume":"101 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124127861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Accuracy of profile maintenance in optimizing compilers","authors":"Youfeng Wu","doi":"10.1109/INTERA.2002.995840","DOIUrl":"https://doi.org/10.1109/INTERA.2002.995840","url":null,"abstract":"Modern processors rely heavily on optimizing compilers to deliver their performance potentials. The compilers, in turn, rely greatly on profile information to focus the optimization efforts and better match the generated code with the target machines. Maintaining the profile in an optimizing compiler is important as many optimizations can benefit from profile information and they are often performed one after the other. Maintaining a profile is, however, tedious and error prone. An erroneous profile is not easy to detect as it affects only the performance, not the correctness, of a program. Maintaining a profile also inherently loses accuracy, as the profile update operations often have to use probabilistic approximation. In this paper, we measure the accuracy of maintaining CFG profiles in a high-performance optimizing compiler. Our data indicates that the compiler maintains the profile more accurately within individual functions than globally across functions, and function inlining may be responsible for the loss of profile accuracy globally. We also identify a number of research issues related to profile maintenance.","PeriodicalId":224706,"journal":{"name":"Proceedings Sixth Annual Workshop on Interaction between Compilers and Computer Architectures","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131129308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"On the predictability of program behavior using different input data sets","authors":"W. Hsu, Howard Chen, P. Yew, Dong-yuan Chen","doi":"10.1109/INTERA.2002.995842","DOIUrl":"https://doi.org/10.1109/INTERA.2002.995842","url":null,"abstract":"Smaller input data sets such as the test and the train input sets are commonly used in simulation to estimate the impact of architecture/micro-architecture features on the performance of SPEC benchmarks. They are also used for profile feedback compiler optimizations. In this paper, we examine the reliability of reduced input sets for performance simulation and profile feedback optimizations. We study the high level metrics such as IPC and procedure level profiles as well as lower level measurements such as execution paths exercised by various input sets on the SPEC2000int benchmark. Our study indicates that the test input sets are not suitable to be used for simulation because they do not have an execution profile similar to the reference input runs. The train data set is better than the test data sets at maintaining similar profiles to the reference input set. However, the observed execution paths leading to cache misses are very different between using the smaller input sets and the reference input sets. For current profile based optimizations, the differences in quality of profiles may not have a significant impact on performance, as tested on the Itanium processor with an Intel compiler. However, we believe the impact of profile quality will be greater for more aggressive profile guided optimizations, such as cache prefetching.","PeriodicalId":224706,"journal":{"name":"Proceedings Sixth Annual Workshop on Interaction between Compilers and Computer Architectures","volume":"138 4","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132090270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Code size efficiency in global scheduling for ILP processors","authors":"Huiyang Zhou, T. Conte","doi":"10.1109/INTERA.2002.995845","DOIUrl":"https://doi.org/10.1109/INTERA.2002.995845","url":null,"abstract":"In global scheduling for ILP processors, region-enlarging optimizations, especially tail duplication, are commonly used. The code size increase due to such optimizations, however, raises serious concerns about the affected I-cache and TLB performance. In this paper, we propose a quantitative measure of the code size efficiency at compile time for any code size related optimization. Then, based on the efficiency of tail duplication, we propose the solutions to two related problems: (1) how to achieve the best performance for a given code size increase, (2) how to get the optimal code size efficiency for any program. Our study shows that code size increase has a significant but varying impact on IPC, e.g., the first 2% code size increase results in 18.5% increase in static IPC, but less than 1% when the given code size further increases from 20% to 30%. We then use this feature to define the optimal code size efficiency and to derive a simple, yet robust threshold scheme finding it. The experimental results using SPECint95 benchmarks show that this threshold scheme finds the optimal efficiency accurately. While the optimal efficiency results show an average increase of 2% in code size, the improved I-cache performance is observed and a speedup of 17% over the natural treegion results is achieved.","PeriodicalId":224706,"journal":{"name":"Proceedings Sixth Annual Workshop on Interaction between Compilers and Computer Architectures","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127869510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Mastering startup costs in assembler-based compiled instruction-set simulation","authors":"Ronan Amicel, F. Bodin","doi":"10.1109/INTERA.2002.995841","DOIUrl":"https://doi.org/10.1109/INTERA.2002.995841","url":null,"abstract":"The increasing size and complexity of embedded software requires extremely fast instruction-set simulation. Compiled instruction-set simulation can provide high simulation speed, but the cost of generating and compiling the simulator can be a problem. We claim that efficient compiled instruction-set simulation with small startup costs is possible, using our assembler-level approach. We present ABSCISS, a retargetable and flexible system that generates optimized compiled simulators from assembler programs. Experimental results show the produced simulators to be significantly faster than interpretive simulators, and also show that our assembler-based approach allows to master the simulator generation and compilation times.","PeriodicalId":224706,"journal":{"name":"Proceedings Sixth Annual Workshop on Interaction between Compilers and Computer Architectures","volume":"3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117097841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Dynamically scheduling VLIW instructions with dependency information","authors":"Sunghyun Jee, K. Palaniappan","doi":"10.1109/INTERA.2002.995839","DOIUrl":"https://doi.org/10.1109/INTERA.2002.995839","url":null,"abstract":"The paper proposes balancing scheduling effort more evenly between the compiler and the processor, by introducing dynamically scheduled Very Long Instruction Word (VLIW) instructions. Dynamically Instruction Scheduled VLIW (DISVLIW) processor is aimed specifically at dynamic scheduling VLIW instructions with dependency information. The DISVLIW processor dynamically schedules each instruction within long instructions using functional unit and dynamic scheduler pairs. Every dynamic scheduler dynamically checks for data dependencies and resource collisions while scheduling each instruction. This scheduling is especially effective in applications containing loops. We simulate the architecture and show that the DISVLIW processor performs significantly better than the VLIW processor for a wide range of cache sizes and across various numerical benchmark applications.","PeriodicalId":224706,"journal":{"name":"Proceedings Sixth Annual Workshop on Interaction between Compilers and Computer Architectures","volume":"4 8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2002-02-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126576190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Compiling for fine-grain concurrency: planning and performing software thread integration","authors":"A. Dean","doi":"10.1109/INTERA.2002.995838","DOIUrl":"https://doi.org/10.1109/INTERA.2002.995838","url":null,"abstract":"Embedded systems require control of many concurrent real-time activities, leading to system designs which feature multiple hardware peripherals with each providing a specific, dedicated service. These peripherals increase system size, cost, weight, power and design time. Software thread integration (STI) provides low-cost thread concurrency on general-purpose processors by automatically interleaving multiple (potentially real-time) threads of control into one. This simplifies hardware to software migration (which eliminates dedicated hardware) and can help embedded system designers meet design constraints such as size, weight, power and cost. The paper introduces automated methods for planning and performing the code transformations needed for integration of functions with more sophisticated control flows than in previous work. We demonstrate the methods by using Thrint, our post pass thread-integrating compiler, to automatically integrate multiple threads for a sample real-time embedded system with fine-grain concurrency. Previous work in thread integration required users to manually integrate loops; this is now performed automatically. The sample application generates an NTSC monochrome video signal (sending out a stream of pixels to a video DAC) with STI to replace a video refresh controller IC. Using Thrint reduces integration time from days to minutes and reclaims up to 99% of the system's fine grain idle time.","PeriodicalId":224706,"journal":{"name":"Proceedings Sixth Annual Workshop on Interaction between Compilers and Computer Architectures","volume":"133 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133583956","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}