Dynamic Precision Autotuning with TAFFO
ACM Transactions on Architecture and Code Optimization (IF 1.6). Pub Date: 2020-05-30. DOI: 10.1145/3388785
Stefano Cherubin, Daniele Cattaneo, Michele Chiari, Giovanni Agosta

Many classes of applications, in both the embedded and high-performance domains, can trade off the accuracy of the computed results for computation performance. One way to achieve such a trade-off is precision tuning: modifying the data types used for the computation by reducing the bit width, or by changing the representation from floating point to fixed point. We present a methodology for high-accuracy dynamic precision tuning based on the identification of input classes (i.e., classes of input datasets that benefit from similar optimizations). When a new input region is detected, the application kernels are re-compiled on the fly with the appropriate selection of parameters. In this way, we obtain a continuous optimization approach that enables the exploitation of reduced-precision computation while progressively exploring the solution space, thus reducing the time lost to compilation overhead. We provide tools to support the automation of the runtime part of the solution, leaving the user only the task of identifying the input classes. Our approach provides a significant performance boost (up to 320%) on typical approximate computing benchmarks without meaningfully affecting the accuracy of the result, as the error always remains below 3%.
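To make the trade-off concrete, the following is a minimal C++ sketch, not TAFFO itself: the same kernel in floating point and in a Q16.16 fixed-point representation, plus a hypothetical input classifier that dispatches between the two. The names InputClass, classify, and the saxpy_* variants are illustrative assumptions, not the paper's API; in the paper, detecting a new input class triggers on-the-fly recompilation of the kernel rather than a static switch between precompiled variants.

```cpp
// Minimal sketch of precision tuning: one kernel in float and one in
// Q16.16 fixed point, selected at runtime by a hypothetical input classifier.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Q16.16 fixed point: 16 integer bits, 16 fractional bits.
using fixed32 = int32_t;
constexpr int FRAC_BITS = 16;

inline fixed32 to_fixed(float x)   { return static_cast<fixed32>(std::lround(x * (1 << FRAC_BITS))); }
inline float   to_float(fixed32 x) { return static_cast<float>(x) / (1 << FRAC_BITS); }
inline fixed32 fx_mul(fixed32 a, fixed32 b) {
    // Widen to 64 bits for the product, then shift back to Q16.16.
    return static_cast<fixed32>((static_cast<int64_t>(a) * b) >> FRAC_BITS);
}

// Reference floating-point kernel: y = a*x + y.
void saxpy_float(float a, const std::vector<float>& x, std::vector<float>& y) {
    for (size_t i = 0; i < x.size(); ++i) y[i] = a * x[i] + y[i];
}

// Reduced-precision fixed-point variant of the same kernel.
void saxpy_fixed(float a, const std::vector<float>& x, std::vector<float>& y) {
    fixed32 fa = to_fixed(a);
    for (size_t i = 0; i < x.size(); ++i)
        y[i] = to_float(fx_mul(fa, to_fixed(x[i])) + to_fixed(y[i]));
}

// Hypothetical input classifier: small-magnitude data fits comfortably in
// Q16.16, so the fixed-point variant is safe; otherwise fall back to float.
enum class InputClass { SmallRange, LargeRange };
InputClass classify(const std::vector<float>& x) {
    float maxabs = 0.0f;
    for (float v : x) maxabs = std::max(maxabs, std::fabs(v));
    return maxabs < 1000.0f ? InputClass::SmallRange : InputClass::LargeRange;
}

int main() {
    std::vector<float> x(1024, 0.5f), y(1024, 1.0f);
    // Dispatch the variant matching the detected input class; in the paper this
    // step would instead recompile the kernel with the tuned data types.
    if (classify(x) == InputClass::SmallRange) saxpy_fixed(2.0f, x, y);
    else                                       saxpy_float(2.0f, x, y);
    std::printf("y[0] = %f\n", y[0]);  // 2.0*0.5 + 1.0 = 2.0 in both variants
}
```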

Updated: 2020-05-30