
Application of data storage and information search in English translation corpus

Published in: Wireless Networks

Abstract

A corpus is a text database that organizes and stores electronic texts in a principled way, and it is an important resource for linguistic research; analyzing a large body of corpus data can also yield empirical rules. The main purpose of this article is to build a bidirectional translation corpus of Chinese and English poetry. Designing the storage structure of the corpus is one of its most fundamental tasks, while improving the speed and efficiency of retrieval, the core function of a corpus, is the focus of this research. This paper studies the storage of the data structure: the storage structure of the database is redesigned with MySQL, including the conceptual and logical structure of the corpus. For the conceptual design, an entity-relationship (ER) diagram is used to represent the relationships among entities. The logical structure follows the MySQL relational model; for completeness, primary keys and foreign keys are specified. In practice, the storage structure and access paths must be chosen according to the actual workload to maximize database efficiency. The English translation corpus studied here achieves an improved data storage structure, and the search algorithm has been comprehensively optimized, enabling higher-precision corpus retrieval. These results can support English translation and English teaching in the future.
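The relational design sketched in the abstract (tables derived from an ER diagram, linked by primary and foreign keys, with an indexed retrieval path) can be illustrated as follows. This is a minimal sketch only: it uses SQLite in place of MySQL for portability, and the table and column names (`source_poem`, `translation`, etc.) are illustrative assumptions, not the schema from the paper.

```python
import sqlite3

# In-memory database for illustration; the paper uses MySQL,
# but the relational design (primary/foreign keys) is the same.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")

# Hypothetical schema: one table for source poems and one for
# their English translations, linked by a foreign key.
conn.executescript("""
CREATE TABLE source_poem (
    poem_id   INTEGER PRIMARY KEY,
    title     TEXT NOT NULL,
    language  TEXT NOT NULL,      -- e.g. 'zh' for Chinese
    body      TEXT NOT NULL
);
CREATE TABLE translation (
    trans_id  INTEGER PRIMARY KEY,
    poem_id   INTEGER NOT NULL REFERENCES source_poem(poem_id),
    language  TEXT NOT NULL,      -- e.g. 'en'
    body      TEXT NOT NULL
);
-- An index on the foreign key speeds up the join used at
-- retrieval time, the optimization goal the abstract mentions.
CREATE INDEX idx_translation_poem ON translation(poem_id);
""")

# Insert a sample aligned pair (illustrative data).
conn.execute(
    "INSERT INTO source_poem VALUES (1, 'Quiet Night Thought', 'zh', "
    "'Chuang qian ming yue guang')")
conn.execute(
    "INSERT INTO translation VALUES (1, 1, 'en', "
    "'Before my bed the moonlight glows')")

# Retrieval: find English translations matching a keyword,
# joined back to the source poem via the foreign key.
rows = conn.execute(
    "SELECT s.title, t.body FROM translation t "
    "JOIN source_poem s ON s.poem_id = t.poem_id "
    "WHERE t.language = 'en' AND t.body LIKE ?",
    ("%moonlight%",),
).fetchall()
print(rows)
```

The same `CREATE TABLE` statements run under MySQL (with an `ENGINE=InnoDB` clause to enforce the foreign-key constraints); more advanced keyword search would typically use a full-text index rather than `LIKE`.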



Author information

Correspondence to Fenghua Zhang.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhang, F. Application of data storage and information search in English translation corpus. Wireless Netw (2021). https://doi.org/10.1007/s11276-021-02690-3

