
A comprehensive survey on video frame interpolation techniques

  • Survey
  • Published in: The Visual Computer

Abstract

Video frame interpolation is an important area of computer vision research, with applications in video post-processing, surveillance, and video restoration. It aims to increase the frame rate of a video sequence by synthesizing intermediate frames between consecutive input frames, producing smoother, clearer motion that makes animation more fluid and reduces display motion blur. Advanced deep learning algorithms can discover knowledge from large-scale, diverse video data; they gain insights about intermediate motion and open new opportunities to further improve video interpolation technologies. This survey presents a comprehensive overview of contributions from the past decade pertinent to the latest developments in this domain. It highlights common challenges in video frame interpolation along three key aspects: high visual quality, low complexity, and high efficiency of interpolated output from regular videos at standard frame rates. We scrutinize the architectures, workflows, performance, advantages, and disadvantages of various state-of-the-art methods, propose a broad categorization, and summarize their experimental results on benchmark datasets. The survey also discusses applications of diverse interpolation frameworks. It provides a backbone reference intended to help future researchers optimize current techniques in both academic and industrial settings.
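To make the task concrete, the sketch below synthesizes the midpoint frame between two consecutive frames using classical Farneback optical flow and backward warping. It is a minimal illustrative baseline written for this overview, assuming OpenCV and NumPy; the function name and parameter values are illustrative choices, not the implementation of any surveyed method. Learned approaches replace each stage (flow estimation, warping, fusion) with trainable components.

```python
import cv2
import numpy as np

def interpolate_midpoint(frame0: np.ndarray, frame1: np.ndarray) -> np.ndarray:
    """Synthesize the frame at t = 0.5 between two consecutive BGR frames.

    Illustrative baseline: Farneback dense flow + backward warping + blending.
    """
    gray0 = cv2.cvtColor(frame0, cv2.COLOR_BGR2GRAY)
    gray1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)

    # Dense optical flow from frame0 to frame1 (pyramid scale 0.5, 3 levels,
    # 15-pixel window, 3 iterations, polynomial expansion n=5, sigma=1.2).
    flow = cv2.calcOpticalFlowFarneback(gray0, gray1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

    h, w = gray0.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))

    # Approximate the flows from the midpoint back to each input as -/+ 0.5
    # times the full flow, then backward-warp both inputs toward t = 0.5.
    warped0 = cv2.remap(frame0,
                        grid_x - 0.5 * flow[..., 0],
                        grid_y - 0.5 * flow[..., 1], cv2.INTER_LINEAR)
    warped1 = cv2.remap(frame1,
                        grid_x + 0.5 * flow[..., 0],
                        grid_y + 0.5 * flow[..., 1], cv2.INTER_LINEAR)

    # Naive average of the two warped frames; occlusion-aware fusion in
    # modern methods weights these adaptively per pixel.
    return cv2.addWeighted(warped0, 0.5, warped1, 0.5, 0.0)

# Usage: applying interpolate_midpoint(prev_frame, next_frame) to every
# consecutive pair of frames doubles the frame rate of a video sequence.
```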




Funding

The authors received no funding from any agency for this study.

Author information


Corresponding author

Correspondence to Anil Singh Parihar.

Ethics declarations

Conflict of interest

The authors declare that they have no conflicts of interest.

Human and animal rights

This article does not contain any studies involving humans or animals performed by any of the authors.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Parihar, A.S., Varshney, D., Pandya, K. et al. A comprehensive survey on video frame interpolation techniques. Vis Comput 38, 295–319 (2022). https://doi.org/10.1007/s00371-020-02016-y

