Towards the Integration of Reverse Converters into the RNS Channels
IEEE Transactions on Computers (IF 3.6) | Pub Date: 2020-03-01 | DOI: 10.1109/tc.2019.2948335
Leonel Sousa , Rogerio Paludo , Paulo Martins , Hector Pettenghi

The conversion from a Residue Number System (RNS) to a weighted representation is a costly inter-modulo operation that introduces delay and area overhead to RNS processors, while also increasing power consumption. This paper proposes a new approach that decomposes the reverse conversion into operations that can be processed by the arithmetic units already present in the independent RNS channels. This leads to more effective reuse of the processor circuitry while enhancing parallelism. Experimental results show that, when the proposed techniques are applied to architectures based on ripple-carry adders for the traditional 3-moduli set, the delay improves on average by 16 percent, the circuit area by 36 percent, and the power consumption by 47 percent. When carry-lookahead adder topologies are considered, these improvements average 45 percent for circuit area and 58 percent for power consumption, while the delay is only slightly reduced. The proposed techniques are applied to a use case in digital filtering, showing an increase in throughput/area of up to 1.25 times and average reductions in energy consumption of 15.6 percent. This work is a step forward toward the practical use of RNS, since reverse conversion underpins other hard inter-modulo operations, such as comparison, scaling, and division.
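For context, the sketch below illustrates in Python what a reverse converter computes: recovering the weighted (binary) value from its residues via the classic Chinese Remainder Theorem, using the traditional 3-moduli set {2^n − 1, 2^n, 2^n + 1} mentioned in the abstract. It is only a minimal software illustration of the operation, not the channel-level hardware decomposition proposed in the paper; the variable names and the choice of n are assumptions for the example.

```python
from math import prod

def reverse_convert(residues, moduli):
    """Classic CRT reverse conversion: recover the weighted value X
    from its residues x_i = X mod m_i for pairwise co-prime moduli m_i."""
    M = prod(moduli)
    X = 0
    for x_i, m_i in zip(residues, moduli):
        M_i = M // m_i           # product of all moduli except m_i
        inv = pow(M_i, -1, m_i)  # multiplicative inverse of M_i modulo m_i
        X += x_i * M_i * inv
    return X % M

# Traditional 3-moduli set {2^n - 1, 2^n, 2^n + 1}, here with n = 4
n = 4
moduli = (2**n - 1, 2**n, 2**n + 1)      # (15, 16, 17), dynamic range M = 4080
X = 1234
residues = tuple(X % m for m in moduli)  # forward conversion, done per channel
assert reverse_convert(residues, moduli) == X
```

In the RNS itself, additions and multiplications are carried out independently per channel on the small residues; it is this reverse conversion back to a weighted representation that is the costly inter-modulo step the paper targets.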
