
State Primitive Learning to Overcome Catastrophic Forgetting in Robotics

Published in Cognitive Computation

Abstract

People can continuously learn a wide range of tasks without catastrophic forgetting. To mimic this capability, current continual-learning methods mainly study one-step supervised problems such as image classification: they aim to retain performance on previously learned classes while a neural network is trained sequentially on new images. In this paper, we concentrate on solving multi-step robotic tasks sequentially with a proposed architecture called state primitive learning. By projecting the original state space into a low-dimensional representation, meaningful state primitives can be generated to describe tasks. Under two different kinds of constraints on the generation of state primitives, the control signals for different robotic tasks can be learned separately with only an efficient linear regression. Experiments on several robotic manipulation tasks demonstrate the efficacy of the new method in learning control signals under the continual-learning scenario, delivering substantially improved performance over the comparison methods.
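The abstract's pipeline, projecting raw states into a low-dimensional primitive space and fitting the per-task control signals with linear regression, can be sketched as below. This is a minimal illustration, not the authors' implementation: the fixed random projection (a stand-in for a learned encoder), the ridge regularizer, the dimensions, and the task names `reach`/`push` are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_task(states, controls, proj, lam=1e-3):
    """Fit a linear readout from state primitives to control signals.

    states:   (T, d) raw robot states for one task
    controls: (T, m) corresponding control signals
    proj:     (d, k) projection into the primitive space (assumed stand-in
              for the learned low-dimensional representation)
    """
    z = states @ proj                       # state primitives, shape (T, k)
    # Ridge-regularized least squares: W = (Z^T Z + lam I)^{-1} Z^T U
    A = z.T @ z + lam * np.eye(z.shape[1])
    return np.linalg.solve(A, z.T @ controls)   # readout W, shape (k, m)

def control(state, proj, W):
    """Predict a control signal for a new state via its primitives."""
    return (state @ proj) @ W

# Toy demonstration: two tasks share the projection but keep separate linear
# readouts, so fitting the second task cannot overwrite the first task's weights.
d, k, m, T = 12, 4, 3, 200
proj = rng.normal(size=(d, k)) / np.sqrt(d)
readouts = {}
for task in ("reach", "push"):
    X = rng.normal(size=(T, d))                      # synthetic states
    true_W = rng.normal(size=(k, m))                 # synthetic task mapping
    U = (X @ proj) @ true_W + 0.01 * rng.normal(size=(T, m))
    readouts[task] = fit_task(X, U, proj)

x_new = rng.normal(size=d)
u = control(x_new, proj, readouts["reach"])
print(u.shape)  # (3,)
```

Because each task's controller is just a small regression in the shared primitive space, sequential training reduces to adding a readout rather than rewriting network weights, which is what makes the forgetting problem tractable here.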



Notes

  1. \(F_{pq}=E_{y\in D}[\frac{\partial \log f(y,\theta )}{\partial \theta _p} \frac{\partial \log f(y,\theta )}{\partial \theta _q}]\)
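The expectation in the note above is typically estimated empirically from per-sample gradients of the log-likelihood; in practice only the diagonal entries \(F_{pp}\) are often kept. A minimal sketch of that diagonal estimate (the toy gradient matrix is purely illustrative):

```python
import numpy as np

def empirical_fisher_diag(grads):
    """Diagonal empirical Fisher: F_pp ≈ mean over samples y of
    (d log f(y, theta) / d theta_p)^2, i.e. the p = q entries of
    F_pq = E_y[ d log f/d theta_p * d log f/d theta_q ]."""
    g = np.asarray(grads)          # (N, P): per-sample gradients of log f
    return (g ** 2).mean(axis=0)   # (P,) diagonal estimate

# Per-sample log-likelihood gradients for 5 samples of a 3-parameter model.
grads = np.array([[1.0, 0.0, 2.0],
                  [1.0, 2.0, 0.0],
                  [1.0, 0.0, 2.0],
                  [1.0, 2.0, 0.0],
                  [1.0, 0.0, 2.0]])
F = empirical_fisher_diag(grads)   # F = [1.0, 1.6, 2.4]
```

Parameters with large \(F_{pp}\) are those the model's likelihood is most sensitive to, which is why Fisher-based continual-learning penalties protect them most strongly.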



Funding

This work is supported by National Key Research and Development Plan of China grant 2017YFB1300202, NSFC grants U1613213, 61375005, 61503383, 61210009, 61876155, the Strategic Priority Research Program of Chinese Academy of Science under Grant XDB32050100, Dongguan core technology research frontier project (2019622101001) and Natural Science Foundation of Jiangsu Province (BK20181189). The work is also supported by the Strategic Priority Research Program of the CAS (Grant XDB02080003) and Key Program Special Fund in XJTLU (KSF-A-01, KSF-E-26 and KSF-P-02).

Author information


Corresponding author

Correspondence to Zhiyong Liu.

Ethics declarations

Conflicts of Interest

The authors declare that they have no conflict of interest.

Ethical Approval

This article does not contain any studies with human participants or animals performed by any of the authors.

Informed Consent

Informed consent was obtained from all individual participants included in the study.


About this article


Cite this article

Xiong, F., Liu, Z., Huang, K. et al. State Primitive Learning to Overcome Catastrophic Forgetting in Robotics. Cogn Comput 13, 394–402 (2021). https://doi.org/10.1007/s12559-020-09784-8

