An Empirical Investigation of Transfer Effects for Reinforcement Learning
Computational Intelligence and Neuroscience Pub Date : 2020-12-16 , DOI: 10.1155/2020/8873057
Jung-Sing Jwo, Ching-Sheng Lin, Cheng-Hsiung Lee, Ya-Ching Lo

Previous studies have shown that training a reinforcement learning model for the sorting problem takes a very long time, even for small data sets. To study whether transfer learning can improve the training process of reinforcement learning, we employ Q-learning as the base reinforcement learning algorithm, use the sorting problem as a case study, and assess performance from two aspects: time expense and brain capacity. We compare the total number of training steps between the nontransfer and transfer methods to study their efficiency, and we evaluate their differences in brain capacity (i.e., the percentage of updated Q-values in the Q-table). According to our experimental results, the difference in the total number of training steps becomes smaller as the size of the numbers to be sorted increases. Our results also show that the brain capacities of transfer and nontransfer reinforcement learning are similar once both reach a similar training level.
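The setup described above can be sketched in code. The following is a minimal, hypothetical illustration of tabular Q-learning on a sorting task, where the state is the current permutation, an action swaps an adjacent pair, and a Q-table passed in from a smaller problem stands in for transfer. The state/action encoding, reward values, and all function names are assumptions for illustration; the paper's exact MDP formulation is not reproduced here. The "brain capacity" metric is computed as the source defines it: the fraction of the full Q-table that has been updated.

```python
import math
import random


def q_learning_sort(n, episodes=500, alpha=0.5, gamma=0.9, eps=0.2, q=None, seed=0):
    """Tabular Q-learning on sorting n distinct numbers (illustrative sketch).

    State: the current permutation (a tuple); action i: swap positions i and i+1.
    Passing a prefilled `q` dict mimics transfer from a previously trained task.
    Returns the Q-table and the total number of training steps taken.
    """
    rng = random.Random(seed)
    q = {} if q is None else q          # (state, action) -> Q-value
    actions = list(range(n - 1))
    goal_check = lambda s: s == tuple(sorted(s))
    steps = 0
    for _ in range(episodes):
        state = tuple(rng.sample(range(n), n))
        for _ in range(200):            # cap episode length for safety
            if goal_check(state):
                break
            # epsilon-greedy action selection
            if rng.random() < eps:
                a = rng.choice(actions)
            else:
                a = max(actions, key=lambda x: q.get((state, x), 0.0))
            nxt = list(state)
            nxt[a], nxt[a + 1] = nxt[a + 1], nxt[a]
            nxt = tuple(nxt)
            # assumed reward shape: bonus on reaching sorted order, small step cost
            reward = 1.0 if goal_check(nxt) else -0.1
            best_next = max(q.get((nxt, x), 0.0) for x in actions)
            old = q.get((state, a), 0.0)
            q[(state, a)] = old + alpha * (reward + gamma * best_next - old)
            state = nxt
            steps += 1
    return q, steps


def brain_capacity(q, n):
    """Fraction of the full Q-table (n! states x (n-1) actions) that was updated."""
    return len(q) / (math.factorial(n) * (n - 1))
```

Comparing `steps` returned by a nontransfer run (`q=None`) against a run seeded with a smaller problem's Q-table, and comparing `brain_capacity` for both, mirrors the two measurements the study reports.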
