eScholarship
Open Access Publications from the University of California


UCLA Electronic Theses and Dissertations

A Study On The Impact Of Compiler Optimizations On High-Level Synthesis

Abstract

High-level synthesis (HLS) is a design process that takes an untimed, behavioral description in a high-level language such as C and produces register-transfer-level (RTL) code implementing the same behavior in hardware. In this design flow, the quality of the generated RTL is greatly influenced by the form of the high-level input description. It follows that both source-level and IR-level compiler optimizations can either improve or hurt the quality of the generated RTL. The problem of ordering compiler optimization passes, known as the phase-ordering problem, has been an area of active research over the past decade. An optimization can enable or disable other optimizations, and such effects are caused by the nature of the optimization itself, the input program being optimized, or the target platform for which the code is being optimized. It is well known in the literature that the standard optimization order chosen by the compiler writer may not be the best order for every input, and can therefore end up producing inferior code.
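As a minimal sketch of the enabling effect described above (this example is ours, not drawn from the thesis): constant propagation can make a branch condition statically false, and only then can dead-code elimination remove the branch, so running the two passes in the opposite order misses the opportunity.

    /* Illustrative sketch (not from the thesis): constant propagation
     * enabling dead-code elimination. Propagating n = 4 makes the condition
     * (n > 8) statically false, which lets dead-code elimination remove the
     * branch body; running dead-code elimination first would find nothing
     * to remove, so the two pass orders yield different code. */
    #include <stdio.h>

    static int scale(int x) {
        const int n = 4;      /* known at compile time                     */
        if (n > 8) {          /* constant propagation: always false        */
            x = x * 100;      /* dead-code elimination can now drop this   */
        }
        return x * n;         /* folds to x * 4                            */
    }

    int main(void) {
        printf("%d\n", scale(3));  /* prints 12 under any legal pass order */
        return 0;
    }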

Existing approaches to the phase-ordering problem target compilers that produce code to be executed on a processor. In this study, we explore the effects of both source-level and IR-level optimizations on high-level synthesis. The characteristics of the generated RTL are highly sensitive to high-level optimizations: the right choice can provide significant benefits, while the wrong choice can cause significant degradation. We consider three source-level optimizations commonly used in high-level synthesis. We study them in isolation and then propose simple yet effective heuristics for applying them to obtain a reasonable latency-area tradeoff. We also study the phase-ordering problem for IR-level optimizations from an HLS perspective. Because many optimizations employed in a typical HLS flow were originally developed with a conventional software compiler in mind, and given the increasing popularity of HLS, we believe such a study is essential to building high-quality HLS tools. Our initial results show that an input-specific pass order can significantly reduce the latency of the generated RTL, which opens up this direction for future research.
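The abstract does not name the three source-level optimizations studied, so the sketch below uses loop unrolling purely as a hypothetical stand-in for the kind of latency-area tradeoff involved: unrolling a reduction loop lets an HLS scheduler issue several multiply-accumulates concurrently, lowering latency at the cost of instantiating more functional units.

    /* Illustrative sketch only; loop unrolling stands in for the kind of
     * source-level optimization discussed and is not necessarily one of the
     * three studied in the thesis. The unrolled version exposes four
     * independent multiply-accumulates per iteration that an HLS scheduler
     * can execute in parallel (lower latency), at the cost of more
     * multipliers and adders in the generated RTL (higher area). */
    #define N 64

    /* Rolled form: minimal hardware, one multiply-accumulate at a time. */
    int dot_rolled(const int a[N], const int b[N]) {
        int acc = 0;
        for (int i = 0; i < N; i++)
            acc += a[i] * b[i];
        return acc;
    }

    /* Unrolled by 4: shorter schedule, wider datapath. */
    int dot_unrolled(const int a[N], const int b[N]) {
        int acc0 = 0, acc1 = 0, acc2 = 0, acc3 = 0;
        for (int i = 0; i < N; i += 4) {
            acc0 += a[i]     * b[i];
            acc1 += a[i + 1] * b[i + 1];
            acc2 += a[i + 2] * b[i + 2];
            acc3 += a[i + 3] * b[i + 3];
        }
        return acc0 + acc1 + acc2 + acc3;
    }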
