
A review on the long short-term memory model

Published in Artificial Intelligence Review

Abstract

Long short-term memory (LSTM) has transformed both machine learning and neurocomputing. According to several online sources, the model has improved Google’s speech recognition, greatly improved machine translation on Google Translate, and improved the answers of Amazon’s Alexa. It is also employed by Facebook, which reported more than 4 billion LSTM-based translations per day as of 2017. Notably, recurrent neural networks had shown rather modest performance until LSTM appeared. Much of this recurrent network’s success lies in its ability to handle the exploding/vanishing gradient problem, a difficult issue to circumvent when training recurrent or very deep neural networks. In this paper, we present a comprehensive review that covers LSTM’s formulation and training, relevant applications reported in the literature, and code resources implementing the model for a toy example.
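The toy-example code resources covered by the review are not reproduced on this page. As a rough illustration of the formulation the paper discusses, the sketch below implements a single forward step of a standard LSTM cell in NumPy, with the forget, input and output gates and the additive cell-state update that underlies the model's handling of vanishing gradients. The weight layout, dimensions and variable names here are illustrative assumptions, not the paper's own code.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    # Pre-activations for the four blocks stacked along the first axis:
    # forget gate, input gate, output gate, candidate cell state.
    z = W @ x_t + U @ h_prev + b
    H = h_prev.shape[0]
    f = sigmoid(z[0:H])            # forget gate
    i = sigmoid(z[H:2 * H])        # input gate
    o = sigmoid(z[2 * H:3 * H])    # output gate
    g = np.tanh(z[3 * H:4 * H])    # candidate cell state
    c_t = f * c_prev + i * g       # additive cell update eases gradient flow
    h_t = o * np.tanh(c_t)         # hidden state (the cell's output)
    return h_t, c_t

# Toy forward pass: random weights, a short random input sequence.
rng = np.random.default_rng(0)
n_in, n_hid = 3, 5
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.zeros(n_hid)
for _ in range(10):
    x_t = rng.normal(size=n_in)
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h)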

Acknowledgements

We thank the reviewers for their thoughtful and thorough reviews of our manuscript; their input has been invaluable in improving the quality of the paper. Special thanks also go to Prof. Jürgen Schmidhuber for taking the time to share his thoughts on the manuscript and to suggest further improvements.

Author information

Corresponding author

Correspondence to Greg Van Houdt.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Van Houdt, G., Mosquera, C. & Nápoles, G. A review on the long short-term memory model. Artif Intell Rev 53, 5929–5955 (2020). https://doi.org/10.1007/s10462-020-09838-1
