A Deep Reinforcement Learning Framework for High-Dimensional Circuit Linearization
IEEE Transactions on Circuits and Systems II: Express Briefs (IF 4.4), Pub Date: 2022-06-15, DOI: 10.1109/tcsii.2022.3183156
Chao Rong, Jeyanandh Paramesh, L. Richard Carley

Despite the successes of Reinforcement Learning (RL) in recent years, tasks that require exploring over long trajectories with limited feedback and searching in high-dimensional spaces remain challenging. This brief proposes a deep RL framework for high-dimensional circuit linearization with an efficient exploration strategy that leverages a scaled dot-product attention scheme and a search-on-the-replay technique. As a proof of concept, a 5-bit digital-to-time converter (DTC) is built as the environment, and an RL agent learns to tune the calibration words of the delay stages to minimize the integral nonlinearity (INL) using only scalar feedback. The policy network, which selects the calibration words, is trained with the Soft Actor-Critic (SAC) algorithm. Our results show that the proposed RL framework can reduce the INL to less than 0.5 LSB within 60,000 trials, far fewer than the size of the search space.
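The brief itself does not include source code. As an illustration of the setting it describes, the sketch below builds a toy 5-bit DTC calibration environment that returns only a scalar INL-based reward. The binary-weighted stage model, the mismatch statistics, and the random-search driver are assumptions made here for illustration; the SAC agent with attention-based exploration and search-on-the-replay proposed in the brief would replace the random-search loop.

```python
# Minimal sketch (not the authors' code): a toy 5-bit DTC calibration
# environment with scalar INL feedback, suitable as a drop-in environment
# for an off-policy RL agent such as SAC. The stage-delay and mismatch
# models are illustrative assumptions.
import numpy as np

N_BITS = 5            # 5-bit DTC -> 2**5 = 32 codes
N_STAGES = N_BITS     # one binary-weighted delay stage per bit (assumed)

class ToyDTCEnv:
    def __init__(self, seed=0):
        rng = np.random.default_rng(seed)
        # Ideal binary-weighted stage delays (in LSB) plus fixed mismatch.
        self.ideal = 2.0 ** np.arange(N_STAGES)               # 1, 2, 4, 8, 16
        self.mismatch = rng.normal(0.0, 0.15, N_STAGES) * self.ideal

    def _inl(self, cal_words):
        """Worst-case INL (in LSB) of the transfer curve for given calibration words."""
        codes = np.arange(2 ** N_BITS)
        bits = (codes[:, None] >> np.arange(N_BITS)) & 1      # code -> bit matrix
        stage_delay = self.ideal + self.mismatch + cal_words
        actual = bits @ stage_delay                            # total delay per code
        # End-point fit, then deviation from the straight line, in LSB.
        ideal_line = np.linspace(actual[0], actual[-1], len(codes))
        lsb = (actual[-1] - actual[0]) / (len(codes) - 1)
        return np.max(np.abs(actual - ideal_line)) / lsb

    def step(self, action):
        """action: one continuous calibration word per stage, e.g. in [-1, 1] LSB."""
        inl = self._inl(np.asarray(action, dtype=float))
        reward = -inl          # scalar feedback only, as in the brief
        done = inl < 0.5       # target: INL below 0.5 LSB
        return reward, done, {"inl": inl}

# Usage: random-search baseline; the brief's SAC agent would replace this loop.
if __name__ == "__main__":
    env = ToyDTCEnv()
    best = np.inf
    for trial in range(10_000):
        action = np.random.uniform(-0.5, 0.5, N_STAGES)
        reward, done, info = env.step(action)
        best = min(best, info["inl"])
        if done:
            print(f"trial {trial}: INL = {info['inl']:.3f} LSB")
            break
    print(f"best INL found: {best:.3f} LSB")
```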

Updated: 2022-06-15