Abstract
Tightly Coupled Processor Arrays (TCPAs), a class of massively parallel loop accelerators, allow applications to offload computationally expensive loops for improved performance and energy efficiency. To achieve these two goals, executing a loop on a TCPA requires the efficient generation of specific programs as well as other configuration data for each distinct combination of loop bounds and number of available processing elements (PEs). Since both parameters are generally unknown at compile time—the number of available PEs due to dynamic resource management, and the loop bounds because they depend on the problem size—both the programs and the configuration data must be generated at runtime. However, pure just-in-time compilation is impractical because mapping a loop program onto a TCPA entails solving multiple NP-complete problems.
As a solution, this article proposes a unique mixed static/dynamic approach called symbolic loop compilation. It is shown that, at compile time, the NP-complete problems (modulo scheduling, register allocation, and routing) can still be solved to optimality in a symbolic way, resulting in a so-called symbolic configuration: a space-efficient intermediate representation parameterized in the loop bounds and the number of PEs. This phase is called symbolic mapping. At runtime, for each requested accelerated execution of a loop program with given loop bounds and a known number of available PEs, a concrete configuration, including PE programs and configuration data for all other components, is generated from the symbolic configuration according to these parameter values. This phase is called instantiation. We describe both phases in detail and show that instantiation runs in polynomial time, with its most complex step, program instantiation, not directly depending on the number of PEs and thus scaling to arbitrary TCPA sizes.
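To illustrate the division of labor between the two phases, consider the following minimal Python sketch. It is our illustration only: the names `SymbolicConfiguration`, `tile_size`, and `instantiate`, and the simplified contents of the configuration, are hypothetical and do not reflect the article's actual data structures. The sketch models a symbolic configuration as closed-form functions of a loop bound N and the number of PEs P, which the runtime phase merely evaluates:

```python
# Illustrative sketch only: class and function names are hypothetical
# and do not reflect the article's actual interfaces.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class SymbolicConfiguration:
    """Result of compile-time symbolic mapping: scheduling decisions are
    stored as closed-form functions of the loop bound N and the number of
    available PEs P, both of which are unknown until runtime."""
    initiation_interval: int              # fixed by symbolic modulo scheduling
    tile_size: Callable[[int, int], int]  # (N, P) -> iterations per PE

def instantiate(cfg: SymbolicConfiguration, N: int, P: int) -> Dict[str, int]:
    """Runtime phase: plug concrete parameter values into the symbolic
    configuration. Only expressions are evaluated; nothing here iterates
    over the P PEs individually, so the cost does not grow with array size."""
    iters_per_pe = cfg.tile_size(N, P)
    return {
        "iterations_per_pe": iters_per_pe,
        "latency_cycles": cfg.initiation_interval * iters_per_pe,
    }

# The same symbolic configuration serves arrays of any size:
cfg = SymbolicConfiguration(initiation_interval=2,
                            tile_size=lambda N, P: -(-N // P))  # ceil(N / P)
print(instantiate(cfg, N=1024, P=4 * 4))    # 4x4 array
print(instantiate(cfg, N=1024, P=32 * 32))  # 32x32 array
```

Because instantiation only evaluates such parameterized expressions rather than re-solving any mapping problem per PE, its runtime is independent of the array size, which matches the scaling behavior reported below.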
To validate the efficiency of this mixed static/dynamic compilation approach, we apply symbolic loop compilation to a set of real-world loop programs from several domains, measuring both compilation time and space requirements. Our experiments confirm that a symbolic configuration is a space-efficient representation suited for systems with little memory—in many cases, a symbolic configuration is smaller than even a single concrete configuration instantiated from it—and that the times for the runtime steps of program instantiation and configuration loading are negligible and, moreover, independent of the size of the available processor array. For example, instantiating a configuration for a matrix-matrix multiplication benchmark takes equally long for 4×4 and 32×32 PEs.