
Predictive Online Optimisation with Applications to Optical Flow


Abstract

Online optimisation revolves around new data being introduced into a problem while it is still being solved; think of deep learning as more training samples become available. We adapt the idea to dynamic inverse problems such as video processing with optical flow. We introduce a corresponding predictive online primal-dual proximal splitting method. The video frames now exactly correspond to the algorithm iterations. A user-prescribed predictor describes the evolution of the primal variable. To prove convergence we need a predictor for the dual variable based on (proximal) gradient flow. This affects the model that the method asymptotically minimises. We show that for inverse problems the effect is, essentially, to construct a new dynamic regulariser based on infimal convolution of the static regularisers with the temporal coupling. We finish by demonstrating excellent real-time performance of our method in computational image stabilisation and convergence in terms of regularisation theory.
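To fix ideas, here is a minimal Julia sketch of one predict-correct iteration. All names (`predictive_pdps_step`, `prox_G`, `prox_Fconj`, `predict`, `tau`, `sigma`) are illustrative, not the paper's code [32]: the actual method, in particular the gradient-flow-based dual predictor and the step length choices, is developed in the body of the paper. The frame index doubles as the iteration index, and the newest frame's data is assumed to enter through the proximal map of the data term, e.g. via a closure.

```julia
# Hypothetical sketch: one video frame = one algorithm iteration.
# `predict`, `prox_G` and `prox_Fconj` are assumed user-supplied;
# the dual predictor of the paper is omitted here.
function predictive_pdps_step(x, y, b, K, Kt, prox_G, prox_Fconj, predict;
                              tau = 0.01, sigma = 0.01)
    xp = predict(x, b)                  # primal predictor: propagate x to the new frame b
    xn = prox_G(xp .- tau .* (Kt * y))  # primal proximal (correction) step
    xbar = 2 .* xn .- xp                # over-relaxation, as in basic PDPS [11]
    yn = prox_Fconj(y .+ sigma .* (K * xbar))  # dual proximal step
    return xn, yn
end
```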


Notes

  1. The total variation term in (1.3) in principle requires \(x \in {\mathrm {BV}}(\varOmega )\), the space of functions of bounded variation on \(\varOmega \). This is not a Hilbert space, but merely a Banach space, so our overall setup (1.1) does not directly apply. However, due to the weak(-\(*\)) lower semicontinuity of convex functionals, any minimiser of (1.3) necessarily lies in \(L^2(\varOmega ) \cap {\mathrm {BV}}(\varOmega )\), so we are justified in working in the Hilbert space \(X=L^2(\varOmega )\) and viewing \({\mathrm {BV}}(\varOmega )\) as a constraint imposed by the total variation term.

  2. Any coercive, convex, proper, lower semicontinuous function \(E: X \rightarrow {\overline{\mathbb {R}}}\) has a minimiser \({\hat{x}}\). By the Fermat principle \(0 \in \partial E({\hat{x}})\). Thus, \({\hat{x}}\in \partial E^*(0)\), which says exactly that \(E^*(x^*) \ge E^*(0) + \langle {\hat{x}},x^*\rangle \) for all \(x^* \in X\) (spelled out after these notes).

  3. The double arrow signifies that the map is set-valued.

  4. Then (5.4) gives \(\varLambda _\mathcal {V}=1\). Constant true displacements are allowed by Lemma 5.1, but constant measurements are not. If \(\Vert x-{\bar{x}}\Vert _{L^2(\varOmega + B(0, \Vert u\Vert ))}^2 \le C \Vert x-{\bar{x}}\Vert _{L^2(\varOmega )}^2\), then Lemma 5.1 and Theorem 5.1 extend to \(\varLambda > C\varLambda _\mathcal {V}\). In practice, to compute \(x \circ v\), we extrapolate x outside \(\varOmega \) such that Neumann boundary conditions are satisfied; a discrete sketch of this extrapolation is given after these notes.

  5. To obtain the linearised optical flow model, we start with \(b_{k+1}(\xi )=b_{k}(v_k(\xi ))\) holding for all \(\xi \in \varOmega \) and a sufficiently smooth image \(b_{k}\). By Taylor expansion \(b_{k}(v_k(\xi )) \approx b_{k}(\xi ) + \langle \nabla b_{k}(\xi ),v_k(\xi )-\xi \rangle \). Thus \(0 = b_{k+1}(\xi )-b_{k}(v_k(\xi )) \approx b_{k+1}(\xi )-b_{k}(\xi ) + \langle \nabla b_{k}(\xi ),\xi -v_k(\xi )\rangle \). A discretisation of this residual is sketched after these notes.

  6. Indeed, in linearised optical flow the displacement field cannot in general be discontinuous. See [13, 29] for approaches designed to avoid this restriction.
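To spell out Note 2: by the inversion rule \(x^* \in \partial E(x) \Leftrightarrow x \in \partial E^*(x^*)\), valid for convex, proper, lower semicontinuous \(E\) (see, e.g., [3]),

$$\begin{aligned} 0 \in \partial E({\hat{x}}) \iff {\hat{x}}\in \partial E^*(0) \iff E^*(x^*) \ge E^*(0) + \langle {\hat{x}},x^*\rangle \quad \text {for all } x^* \in X. \end{aligned}$$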
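As an illustration of the extrapolation mentioned in Note 4, the following Julia sketch (hypothetical; the paper's code [32] may realise it differently) computes \(x \circ v\) by bilinear interpolation with sampling coordinates clamped to the discrete domain, which extends x constantly outside \(\varOmega \), i.e. with zero (Neumann) normal derivative:

```julia
# Neumann-type extrapolation via coordinate clamping (a sketch).
# `v(i, j)` is assumed to return the displaced, possibly fractional,
# sampling coordinates for pixel (i, j).
function warp_neumann(x::AbstractMatrix, v)
    m, n = size(x)
    out = similar(x, Float64)
    for j in 1:n, i in 1:m
        p, q = v(i, j)
        p = clamp(p, 1, m); q = clamp(q, 1, n)  # replicate (Neumann) boundary extension
        i0, j0 = floor(Int, p), floor(Int, q)
        i1, j1 = min(i0 + 1, m), min(j0 + 1, n)
        di, dj = p - i0, q - j0                 # bilinear interpolation weights
        out[i, j] = (1 - di) * (1 - dj) * x[i0, j0] + di * (1 - dj) * x[i1, j0] +
                    (1 - di) * dj * x[i0, j1] + di * dj * x[i1, j1]
    end
    return out
end
```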
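For Note 5, a direct discretisation of the linearised residual \(b_{k+1}(\xi )-b_{k}(\xi ) + \langle \nabla b_{k}(\xi ),\xi -v_k(\xi )\rangle \) with central differences might read as follows (a sketch; `u1`, `u2`, storing the two components of the displacement \(v_k(\xi )-\xi \), are assumed names):

```julia
# Linearised optical flow residual b_{k+1} - b_k - ⟨∇b_k, u⟩,
# where u = v_k(ξ) - ξ; the gradient of b_k is taken by central
# differences in the interior and left zero on the boundary.
function flow_residual(b_next::AbstractMatrix, b::AbstractMatrix, u1, u2)
    g1 = zero(b); g2 = zero(b)
    g1[2:end-1, :] = (b[3:end, :] .- b[1:end-2, :]) ./ 2
    g2[:, 2:end-1] = (b[:, 3:end] .- b[:, 1:end-2]) ./ 2
    return b_next .- b .- g1 .* u1 .- g2 .* u2
end
```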

References

  1. Aragón Artacho, F.J., Geoffroy, M.H.: Characterization of metric regularity of subdifferentials. J. Convex Anal. 15(2), 365–380 (2008)

  2. Bastianello, N., Simonetto, A., Carli, R.: Prediction-correction splittings for time-varying optimization with intermittent observations. IEEE Control Syst. Lett. 4(2), 373–378 (2020). https://doi.org/10.1109/LCSYS.2019.2930491

  3. Bauschke, H.H., Combettes, P.L.: Convex Analysis and Monotone Operator Theory in Hilbert Spaces, 2nd edn. CMS Books in Mathematics. Springer (2017). https://doi.org/10.1007/978-3-319-48311-5

  4. Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009). https://doi.org/10.1137/080716542

  5. Becker, F., Petra, S., Schnörr, C.: Optical flow. In: O. Scherzer (ed.) Handbook of Mathematical Methods in Imaging, pp. 1945–2004. Springer (2015). https://doi.org/10.1007/978-1-4939-0790-8_38

  6. Belmega, E.V., Mertikopoulos, P., Negrel, R., Sanguinetti, L.: Online convex optimization and no-regret learning: Algorithms, guarantees and applications (2018)

  7. Bezanson, J., Edelman, A., Karpinski, S., Shah, V.B.: Julia: A fresh approach to numerical computing. SIAM Rev. 59(1), 65–98 (2017)

  8. Biegler, L., Ghattas, O., Heinkenschloss, M., Keyes, D., Waanders, B.: Real-Time PDE-Constrained Optimization. Computational Science and Engineering. SIAM (2007)

  9. Bousquet, O., Bottou, L.: The tradeoffs of large scale learning. Adv. Neural Inf. Process. Syst. 20, 161–168 (2008)

  10. Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20(1), 89–97 (2004). https://doi.org/10.1023/B:JMIV.0000011325.36760.1e

  11. Chambolle, A., Pock, T.: A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 40, 120–145 (2011). https://doi.org/10.1007/s10851-010-0251-1

  12. Chaudhury, K., Mehrotra, R.: A trajectory-based computational model for optical flow estimation. IEEE Trans. Robot. Autom. 11(5), 733–741 (1995). https://doi.org/10.1109/70.466611

  13. Chen, K., Lorenz, D.A.: Image sequence interpolation based on optical flow, segmentation, and optimal control. IEEE Trans. Image Process. (2012). https://doi.org/10.1109/TIP.2011.2179305

  14. Clason, C., Valkonen, T.: Introduction to nonsmooth analysis and optimization (2020). https://tuomov.iki.fi/m/nonsmoothbook_part.pdf. Work in progress

  15. Engl, H., Hanke, M., Neubauer, A.: Regularization of Inverse Problems. Mathematics and Its Applications. Springer (2000)

  16. Franzen, R.: Kodak lossless true color image suite. PhotoCD PCD0992. Lossless, true color images released by the Eastman Kodak Company (1999). http://r0k.us/graphics/kodak/

  17. Grötschel, M., Krumke, S., Rambau, J.: Online Optimization of Large Scale Systems. Springer, New York (2013)

  18. Hall, E., Willett, R.: Dynamical models and tracking regret in online convex programming. In: S. Dasgupta, D. McAllester (eds.) Proceedings of the 30th International Conference on Machine Learning, Proceedings of Machine Learning Research, vol. 28, pp. 579–587. PMLR, Atlanta, Georgia, USA (2013)

  19. Hazan, E.: Introduction to online convex optimization. Found. Trends Optim. 2(3–4), 157–325 (2016). https://doi.org/10.1561/2400000013

  20. Horn, B.K., Schunck, B.G.: Determining optical flow. In: Proc. SPIE, vol. 0281, pp. 319–331. SPIE (1981). https://doi.org/10.1117/12.965761

  21. Iglesias, J.A., Kirisits, C.: Convective regularization for optical flow, pp. 184–201. De Gruyter (2016). https://doi.org/10.1515/9783110430394

  22. Jauhiainen, J., Kuusela, P., Seppänen, A., Valkonen, T.: Relaxed Gauss-Newton methods with applications to electrical impedance tomography. SIAM J. Imaging Sci. 13, 1415–1445 (2020). https://doi.org/10.1137/20M1321711

  23. Nagel, H.H.: Extending the ‘oriented smoothness constraint’ into the temporal domain and the estimation of derivatives of optical flow. In: Faugeras, O. (ed.) Computer Vision–ECCV 90, pp. 139–148. Springer, Berlin (1990)

  24. Nagel, H.H., et al.: Constraints for the estimation of displacement vector fields from image sequences. In: Proceedings of the Eighth International Joint Conference on Artificial Intelligence (II), vol. 2, pp. 945–951. IJCAI (1983)

  25. Orabona, F.: A modern introduction to online learning (2020)

  26. Salgado, A., Sánchez, J.: Temporal constraints in large optical flow estimation. In: R. Moreno Díaz, F. Pichler, A. Quesada Arencibia (eds.) Computer Aided Systems Theory–EUROCAST 2007, pp. 709–716. Springer, Berlin (2007)

  27. Simonetto, A.: Time-varying convex optimization via time-varying averaged operators (2017)

  28. Tico, M.: Digital image stabilization. In: A.A. Zaher (ed.) Recent Advances in Signal Processing, chap. 1. IntechOpen, Rijeka (2009). https://doi.org/10.5772/7458

  29. Valkonen, T.: Transport equation and image interpolation with SBD velocity fields. J. Math. Pures Appl. 95, 459–494 (2011). https://doi.org/10.1016/j.matpur.2010.10.010

  30. Valkonen, T.: Testing and non-linear preconditioning of the proximal point method. Appl. Math. Optim. (2018). https://doi.org/10.1007/s00245-018-9541-6

  31. Valkonen, T.: First-order primal-dual methods for nonsmooth nonconvex optimisation (2019). https://tuomov.iki.fi/m/firstorder.pdf. Submitted

  32. Valkonen, T.: Julia codes for “Predictive Online Optimisation with Applications to Optical Flow”. Software on Zenodo (2020). https://doi.org/10.5281/zenodo.3659180

  33. Valkonen, T.: Preconditioned proximal point methods and notions of partial subregularity. J. Convex Anal. (2020). In press

  34. Volz, S., Bruhn, A., Valgaerts, L., Zimmer, H.: Modeling temporal coherence for optical flow. In: 2011 International Conference on Computer Vision, pp. 1116–1123. IEEE (2011). https://doi.org/10.1109/ICCV.2011.6126359

  35. Weickert, J., Schnörr, C.: Variational optic flow computation with a spatio-temporal smoothness constraint. J. Math. Imaging Vis. 14(3), 245–255 (2001). https://doi.org/10.1023/A:1011286029287

  36. Zhang, Y., Ravier, R.J., Tarokh, V., Zavlanos, M.M.: Distributed online convex optimization with improved dynamic regret (2019)

  37. Zhou, J., Hubel, P., Tico, M., Schulze, A.N., Toft, R.: Image registration methods for still image stabilization. US Patent 9,384,552 (2016)

  38. Zinkevich, M.: Online convex programming and generalized infinitesimal gradient ascent. In: Proceedings of the 20th International Conference on Machine Learning (ICML-03), pp. 928–936. AAAI (2003)


Acknowledgements

This research has been supported by Escuela Politécnica Nacional internal Grant PIJ-18-03 and Academy of Finland Grants 314701 and 320022.

Author information

Correspondence to Tuomo Valkonen.


Appendix A: Local Strong Convexity

We establish local strong convexity of the indicator function of the ball. This has been shown in [1] to be equivalent to strong metric subregularity of the subdifferential. For related characterisations, see also [33] and, regarding total variation, [22, Appendix].

Lemma A.1

Let \(X\) be a Hilbert space, and let \(F: X \rightarrow {\overline{\mathbb {R}}}\), \(F=\delta _{\mathrm {cl}\, B(0, \alpha )}\). Suppose \(x \in \partial B(0, \alpha )\) and \(0 \ne x^* \in \partial F(x)\). Then,

$$\begin{aligned} F(x')-F(x) \ge \langle x^*,x'-x\rangle + \frac{\gamma }{2}\Vert x'-x\Vert ^2 \quad (x' \in U_x) \end{aligned}$$

for

$$\begin{aligned} U_x = \begin{cases} X, &{} 0 \le \gamma \alpha \le \Vert x^*\Vert , \\ {[}\mathrm {cl}\, B(0,\alpha )]^c \cup \mathrm {cl}\, B(x, \alpha ), &{} \alpha \gamma > \Vert x^*\Vert . \end{cases} \end{aligned}$$

Proof

Observe that \(x^*=\lambda x\) for \(\lambda :=\Vert x^*\Vert /\alpha \). If \(x' \not \in \mathrm {cl}\, B(0, \alpha )\), there is nothing to prove, so take \(x' \in \mathrm {cl}\, B(0, \alpha )\). Then, we need \( 0 \ge \lambda \langle x,x'-x\rangle + \frac{\gamma }{2}\Vert x'-x\Vert ^2. \) Since \(\Vert x\Vert =\alpha \), this says

$$\begin{aligned} \left( \lambda -\frac{\gamma }{2}\right) \alpha ^2 \ge \frac{\gamma }{2}\Vert x'\Vert ^2 + \left( \lambda -\gamma \right) \langle x,x'\rangle . \end{aligned}$$
(A.1)
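Written out (a step the proof leaves implicit), the reformulation uses \(\Vert x\Vert =\alpha \) to expand

$$\begin{aligned} \lambda \langle x,x'-x\rangle + \frac{\gamma }{2}\Vert x'-x\Vert ^2 = \frac{\gamma }{2}\Vert x'\Vert ^2 + \left( \lambda -\gamma \right) \langle x,x'\rangle - \left( \lambda -\frac{\gamma }{2}\right) \alpha ^2, \end{aligned}$$

so that requiring this expression to be nonpositive is exactly (A.1).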

Suppose \(\gamma \le \lambda \), which is the first case of \(U_x\). Then (A.1) is seen to hold by application of Young’s inequality on the inner product term, followed by \(\Vert x'\Vert \le \alpha \).
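In detail, since \(\lambda -\gamma \ge 0\), Young's inequality \(\langle x,x'\rangle \le \tfrac{1}{2}(\Vert x\Vert ^2 + \Vert x'\Vert ^2)\) gives

$$\begin{aligned} \frac{\gamma }{2}\Vert x'\Vert ^2 + \left( \lambda -\gamma \right) \langle x,x'\rangle \le \frac{\lambda }{2}\Vert x'\Vert ^2 + \frac{\lambda -\gamma }{2}\alpha ^2 \le \left( \lambda -\frac{\gamma }{2}\right) \alpha ^2, \end{aligned}$$

the final inequality using \(\Vert x'\Vert \le \alpha \).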

If, on the other hand, \(\gamma > \lambda \), which is the second case of \(U_x\), we take \(x' \in \mathrm {cl}\, B(x, \alpha ) \cap \mathrm {cl}\, B(0,\alpha )\). Expanding \(\Vert x'-x\Vert ^2 = \Vert x'\Vert ^2 - 2\langle x',x\rangle + \alpha ^2 \le \alpha ^2\) shows that \( \langle x',x\rangle \ge \tfrac{1}{2}\Vert x'\Vert ^2. \) Since \(\lambda -\gamma <0\), this and \(\Vert x'\Vert \le \alpha \) prove (A.1). \(\square \)


Cite this article

Valkonen, T. Predictive Online Optimisation with Applications to Optical Flow. J Math Imaging Vis 63, 329–355 (2021). https://doi.org/10.1007/s10851-020-01000-4

