Abstract
To establish the historical sequence of abstract dynamics in Korean-Chinese collaborative translation under machine learning, and to dynamically generate abstract representations of the translation decoding tree during decoding, this paper constructs a recursive recurrent neural network model that combines the advantages of both kinds of networks. The model can exploit traditional machine translation features while gradually building abstract representations of translation candidates during decoding, effectively mining the language model and other global features of machine translation. Experiments were conducted on Korean-Chinese translation data with varying vocabulary sizes, sentence lengths, and language pairs. The test results show that the model effectively improves machine translation performance. In addition, adjusting the pre-order word order further optimizes the recursive recurrent neural network model and significantly improves translation quality.
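The abstract describes combining a recurrent decoder with a recursive composition of translation-candidate representations. The following is a minimal sketch of that idea, not the paper's exact architecture: the weight names (`W_h`, `W_x`, `U`), dimensions, and composition order are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden size (illustrative)

W_h = rng.standard_normal((d, d)) * 0.1      # recurrent weights (assumed)
W_x = rng.standard_normal((d, d)) * 0.1      # input embedding weights (assumed)
U   = rng.standard_normal((d, 2 * d)) * 0.1  # recursive composition weights (assumed)

def compose(left, right):
    """Recursive step: merge two child representations into one parent node."""
    return np.tanh(U @ np.concatenate([left, right]))

def decode_step(h_prev, x_emb, cand_repr):
    """Recurrent step conditioned on the candidate's recursive representation."""
    return np.tanh(W_h @ h_prev + W_x @ x_emb + cand_repr)

# Build the abstract representation of a 3-word translation candidate
# by recursively composing its word vectors into a tree.
words = [rng.standard_normal(d) for _ in range(3)]
cand = compose(compose(words[0], words[1]), words[2])

# Run the recurrent decoder over the candidate, feeding in that representation.
h = np.zeros(d)
for w in words:
    h = decode_step(h, w, cand)

print(h.shape)  # (8,)
```

The key design point sketched here is that the recursive network supplies a single vector summarizing the partial translation, which the recurrent decoder consumes at every step alongside the usual word input.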
Cite this article
Zhu, L. Exploration on Korean-Chinese collaborative translation method based on recursive recurrent neural network. Pers Ubiquit Comput 24, 309–318 (2020). https://doi.org/10.1007/s00779-019-01347-5