research-article

Symbolic Loop Compilation for Tightly Coupled Processor Arrays

Published: 29 July 2021

Abstract

Tightly Coupled Processor Arrays (TCPAs), a class of massively parallel loop accelerators, allow applications to offload computationally expensive loops for improved performance and energy efficiency. To achieve these two goals, executing a loop on a TCPA requires the efficient generation of specific programs as well as other configuration data for each distinct combination of loop bounds and number of available processing elements (PEs). Since both parameters are generally unknown at compile time (the number of available PEs due to dynamic resource management, and the loop bounds because they depend on the problem size), both the programs and the configuration data must be generated at runtime. However, pure just-in-time compilation is impractical, because mapping a loop program onto a TCPA entails solving multiple NP-complete problems.

As a solution, this article proposes a unique mixed static/dynamic approach called symbolic loop compilation. It is shown that at compile time, the NP-complete problems (modulo scheduling, register allocation, and routing) can still be solved to optimality in a symbolic way, resulting in a so-called symbolic configuration, a space-efficient intermediate representation parameterized in the loop bounds and number of PEs. This phase is called symbolic mapping. At runtime, for each requested accelerated execution of a loop program with given loop bounds and known number of available PEs, a concrete configuration, including PE programs and configuration data for all other components, is generated from the symbolic configuration according to these parameter values. This phase is called instantiation. We describe both phases in detail and show that instantiation runs in polynomial time, with its most complex step, program instantiation, not directly depending on the number of PEs and thus scaling to arbitrary sizes of TCPAs.
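To make the division of work between the two phases concrete, the following minimal Python sketch mimics the flow of compiling once and instantiating per request. It is an illustration only, under assumed names: SymbolicConfiguration, symbolic_mapping, instantiate, and the string-based templates are all invented for this sketch and do not correspond to the authors' compiler, its data structures, or its APIs.

    from dataclasses import dataclass

    # Conceptual sketch only: all names and placeholder strings are
    # hypothetical and not part of the authors' toolchain.

    @dataclass
    class SymbolicConfiguration:
        """Compile-time result, parameterized in loop bounds and number of PEs."""
        program_template: str   # PE program with symbolic loop bounds left in place
        routing_template: str   # interconnect/configuration data, also symbolic

    def symbolic_mapping(loop_nest: str) -> SymbolicConfiguration:
        """Compile-time phase: solve modulo scheduling, register allocation,
        and routing symbolically (placeholder here) and emit a symbolic
        configuration valid for any loop bounds and array size."""
        return SymbolicConfiguration(
            program_template=f"program({loop_nest}; bounds=N1..Nk)",
            routing_template=f"routing({loop_nest}; rows=R, cols=C)",
        )

    def instantiate(cfg: SymbolicConfiguration,
                    loop_bounds: tuple[int, ...],
                    pe_rows: int, pe_cols: int) -> dict:
        """Runtime phase: substitute the concrete parameter values to obtain
        concrete PE programs and configuration data."""
        return {
            "pe_program": cfg.program_template.replace("N1..Nk", str(loop_bounds)),
            "config_data": cfg.routing_template.replace(
                "rows=R, cols=C", f"rows={pe_rows}, cols={pe_cols}"),
        }

    # Compile once, then instantiate per request once the parameters are known.
    sym_cfg = symbolic_mapping("matmul")
    cfg_4x4 = instantiate(sym_cfg, loop_bounds=(64, 64, 64), pe_rows=4, pe_cols=4)
    cfg_32x32 = instantiate(sym_cfg, loop_bounds=(64, 64, 64), pe_rows=32, pe_cols=32)

In the sketch, the expensive symbolic_mapping step runs only once at compile time, while instantiate merely substitutes parameter values, which is why (as the article shows) the runtime phase stays cheap and does not grow with the array size.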

To validate the efficiency of this mixed static/dynamic compilation approach, we apply symbolic loop compilation to a set of real-world loop programs from several domains, measuring both compilation time and space requirements. Our experiments confirm that a symbolic configuration is a space-efficient representation suited for systems with little memory (in many cases, a symbolic configuration is smaller than even a single concrete configuration instantiated from it) and that the times for the runtime phase of program instantiation and configuration loading are negligible and moreover independent of the size of the available processor array. To give an example, instantiating a configuration for a matrix-matrix multiplication benchmark takes equally long for 4×4 and 32×32 PEs.

        • Published in

          ACM Transactions on Embedded Computing Systems, Volume 20, Issue 5
          September 2021
          342 pages
          ISSN: 1539-9087
          EISSN: 1558-3465
          DOI: 10.1145/3468851
          • Editor: Tulika Mitra

          Copyright © 2021 ACM

          Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

          Publisher

          Association for Computing Machinery

          New York, NY, United States

          Publication History

          • Published: 29 July 2021
          • Accepted: 1 May 2021
          • Revised: 1 March 2021
          • Received: 1 November 2020
          Published in TECS Volume 20, Issue 5
