Inès Winandy;Arnaud Dion;Florent Manni;Pierre-Loïc Garoche;Dorra Ben Khalifa;Matthieu Martel
{"title":"Automated Fixed-Point Precision Optimization for FPGA Synthesis","authors":"Inès Winandy;Arnaud Dion;Florent Manni;Pierre-Loïc Garoche;Dorra Ben Khalifa;Matthieu Martel","doi":"10.1109/OJCAS.2025.3580744","DOIUrl":null,"url":null,"abstract":"Precision tuning of fixed-point arithmetic is a powerful technique for optimizing hardware designs on, where computing resources and memory are often severely constrained. While fixed-point arithmetic offers significant performance and area advantages over floating-point implementations, deriving an appropriate fixed-point representation remains a challenging task. In particular, developers must carefully select the number of bits assigned to the integer and fractional parts of each variable to balance accuracy and resource consumption. In this article, we introduce an original precision tuning technique for synthesizing fixed-point programs from floating-point code, specifically targeting platforms. The distinguishing feature of our technique lies in its formal approach to error analysis: it systematically propagates numerical errors through computations to infer variable-specific fixed-point formats that guarantee user-specified accuracy bounds. Unlike heuristic or ad-hoc methods, our technique provides formal guarantees on the final accuracy of the generated code, ensuring safe deployment on hardware platforms. To enable hardware-friendly implementations, the resulting fixed-point programs use the ap_fixed data types provided by High Level Synthesis (HLS) tools, allowing fine-grained control over the precision of each variable. Our method has been implemented within the <sc>POPiX 2.0</small> framework, which automatically generates optimized fixed-point code ready for synthesis. Experimental results on a set of embedded benchmarks show that our fixed-point codes use predominantly fewer machine cycles than floating-point codes when compiled on an with the state-of-the-art HLS compiler by AMD. Also, our generated fixed-point codes reduce hardware resource usage, such as LUTs, flip-flops, and DSP blocks, with typical reductions ranging from 67% to 83% compared to double precision floating-point codes, depending on the application.","PeriodicalId":93442,"journal":{"name":"IEEE open journal of circuits and systems","volume":"6 ","pages":"192-204"},"PeriodicalIF":2.4000,"publicationDate":"2025-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11039693","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"IEEE open journal of circuits and systems","FirstCategoryId":"1085","ListUrlMain":"https://ieeexplore.ieee.org/document/11039693/","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"ENGINEERING, ELECTRICAL & ELECTRONIC","Score":null,"Total":0}
Citations: 0
Abstract
Precision tuning of fixed-point arithmetic is a powerful technique for optimizing hardware designs on FPGAs, where computing resources and memory are often severely constrained. While fixed-point arithmetic offers significant performance and area advantages over floating-point implementations, deriving an appropriate fixed-point representation remains a challenging task. In particular, developers must carefully select the number of bits assigned to the integer and fractional parts of each variable to balance accuracy and resource consumption. In this article, we introduce an original precision tuning technique for synthesizing fixed-point programs from floating-point code, specifically targeting FPGA platforms. The distinguishing feature of our technique lies in its formal approach to error analysis: it systematically propagates numerical errors through the computations to infer variable-specific fixed-point formats that guarantee user-specified accuracy bounds. Unlike heuristic or ad hoc methods, our technique provides formal guarantees on the final accuracy of the generated code, ensuring safe deployment on hardware platforms. To enable hardware-friendly implementations, the resulting fixed-point programs use the ap_fixed data types provided by High-Level Synthesis (HLS) tools, allowing fine-grained control over the precision of each variable. Our method has been implemented within the POPiX 2.0 framework, which automatically generates optimized fixed-point code ready for synthesis. Experimental results on a set of embedded benchmarks show that our fixed-point codes use fewer machine cycles than floating-point codes in most cases when compiled on an FPGA with the state-of-the-art HLS compiler by AMD. Our generated fixed-point codes also reduce hardware resource usage, such as LUTs, flip-flops, and DSP blocks, with typical reductions ranging from 67% to 83% compared to double-precision floating-point codes, depending on the application.
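To make the abstract concrete, the sketch below shows what a per-variable fixed-point format looks like with the ap_fixed<W, I> type from the AMD/Xilinx Vitis HLS library, where W is the total bit width and I the number of integer bits (sign included). The kernel, variable names, and bit widths are illustrative assumptions, not taken from the paper; in POPiX 2.0 such formats would be inferred automatically from the user-specified accuracy bound.

// Illustrative sketch only: a small kernel written with per-variable ap_fixed
// formats, in the spirit of the HLS-oriented fixed-point code the article targets.
// Requires the AMD/Xilinx Vitis HLS headers (ap_fixed.h); all widths below are
// assumed for the example, not produced by the POPiX 2.0 analysis.
#include <ap_fixed.h>

// ap_fixed<W, I>: W total bits, I integer bits (sign included),
// leaving W - I bits for the fractional part.
typedef ap_fixed<16, 4>  coeff_t;   // assumed format: 4 integer bits, 12 fractional bits
typedef ap_fixed<18, 6>  sample_t;  // assumed format: 6 integer bits, 12 fractional bits
typedef ap_fixed<32, 10> acc_t;     // wider accumulator to absorb accumulated rounding error

// Fixed-point dot product: each variable carries its own format, so the
// accuracy/resource trade-off can be tuned bit by bit.
acc_t dot_product(const coeff_t c[8], const sample_t x[8]) {
    acc_t acc = 0;
    for (int i = 0; i < 8; i++) {
        acc += (acc_t)(c[i] * x[i]); // product cast to the accumulator format before summing
    }
    return acc;
}

Giving the accumulator more bits than the operands is a standard way to keep the accumulated rounding error below a target bound; the article's contribution, as described above, is to derive such per-variable widths formally from an error analysis rather than by hand.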