Cameras Seeing Cameras Geometry

Published in Advances in Applied Clifford Algebras.

Abstract

We study several theoretical aspects of both 2D and 3D intra multi-view geometry of calibrated cameras when all that they can reliably recognize is each other. Starting with minimal reconstructable configurations, we propose a method for obtaining the position-orientation structure of such camera ensembles, up to a global similarity. In the 3D setting we base our analysis on Rodrigues’ vector techniques familiar from mechanics and robotics. We also examine the average number of visible cameras and discuss some kinematic aspects of the problem.


Notes

  1. The focal distance may be set to one.

  2. With the exception of the case \(\mathbf{u}=-\mathbf{v}\), in which \(\mathbf{c}\) is infinite in magnitude and oriented arbitrarily in the plane \(\mathbf{u}^\perp \).

References

  1. Arun, K.S., Huang, T.S., Blostein, S.D.: Least-squares fitting of two 3-D point sets. IEEE Trans. Pattern Anal. Mach. Intell. 9(5), 698–700 (1987)

  2. Aspnes, J., Eren, T., Goldenberg, D.K., Morse, A.S., Whiteley, W., Yang, Y.R., Anderson, B.D.O., Belhumeur, P.N.: A theory of network localization. IEEE Trans. Mob. Comput. 5(12), 1663–1678 (2006)

  3. Brezov, D.S.: Projective bivector parametrization of isometries in low dimensions. In: Proceedings of the Nineteenth International Conference on Geometry, Integrability and Quantization. pp. 91–104. Avangard Prima, Sofia, Bulgaria (2018). https://doi.org/10.7546/giq-19-2018-91-104

  4. Brezov, D.S., Mladenova, C.D., Mladenov, I.M.: From the kinematics of precession motion to generalized Rabi cycles. Adv. Math. Phys. 2018 (2018)

  5. Cao, M.W., Jia, W., Zhao, Y., Li, S.J., Liu, X.P.: Fast and robust absolute camera pose estimation with known focal length. Neural Comput. Appl. 29(5), 1383–1398 (2018). https://doi.org/10.1007/s00521-017-3032-6

  6. En, S., Lechervy, A., Jurie, F.: RPNet: an end-to-end network for relative camera pose estimation. In: Proceedings of the European Conference on Computer Vision (ECCV) (2018)

  7. Eren, T., Goldenberg, O., Whiteley, W., Yang, Y.R., Morse, A.S., Anderson, B.D., Belhumeur, P.N.: Rigidity, computation, and randomization in network localization. In: IEEE INFOCOM 2004. vol. 4, pp. 2673–2684. IEEE (2004)

  8. Faugeras, O.D., Hebert, M.: The representation, recognition, and locating of 3-D objects. Int. J. Rob. Res. 5(3), 27–52 (1986). https://doi.org/10.1177/027836498600500302

  9. Halperin, T., Werman, M.: An epipolar line from a single pixel. In: 2018 IEEE Winter Conference on Applications of Computer Vision (WACV). pp. 983–991 (2018)

  10. Hartley, R., Zisserman, A.: Multiple View Geometry in Computer Vision, 2nd edn. Cambridge University Press, New York (2003)

  11. Kasten, Y., Werman, M.: Two view constraints on the epipoles from few correspondences. In: 2018 25th IEEE International Conference on Image Processing (ICIP). pp. 888–892 (2018). https://doi.org/10.1109/ICIP.2018.8451727

  12. Kenmogne, I.F., Drevelle, V., Marchand, E.: Cooperative localization of drones by using interval methods. Acta Cybernetica 24(3), 557–572 (2020). https://doi.org/10.14232/actacyb.24.3.2020.15

  13. Levi, N., Werman, M.: The viewing graph. In: 2003 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), vol. 1, pp. I–I (2003)

  14. Li, H.C.: Average length of chords drawn from a point to a circle. Pi Mu Epsil. J. 8(3), 146–150 (1985)

  15. Pasquetti, M., Michieletto, G., Zhao, S., Zelazo, D., Cenedese, A.: A unified dissertation on bearing rigidity theory. arXiv preprint arXiv:1902.03101 (2019)

  16. Piña, E.: A new parametrization of the rotation matrix. Am. J. Phys. 51(4), 375–379 (1983). https://doi.org/10.1119/1.13253

  17. Sato, J.: Recovering epipolar geometry from mutual projections of multiple cameras. Int. J. Comput. Vis. 66(2), 123–140 (2006)

  18. Smale, S.: Mathematical problems for the next century. Math. Intell. 20(2), 7–15 (1998)

  19. Taylor, C.J., Spletzer, J.: A bounded uncertainty approach to cooperative localization using relative bearing constraints. In: 2007 IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 2500–2506. IEEE (2007)

  20. Zelazo, D., Zhao, S.: Formation control and rigidity theory 17:1–16 (2019)

  21. Zhang, F., Kumar, V., Pereira, G.A.: Necessary and sufficient conditions for localization of multiple robot platforms. In: International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, vol. 46954, pp. 13–21 (2004)

  22. Zhao, S., Zelazo, D.: Bearing rigidity theory and its applications for control and estimation of network systems: life beyond distance rigidity. IEEE Control Syst. Mag. 39(2), 66–83 (2019)

Author information

Corresponding author

Correspondence to Danail Brezov.

Additional information


This article is part of the Topical Collection on Proceedings ICCA 12, Hefei, 2020, edited by Guangbin Ren, Uwe Kähler, Rafal Ablamowicz, Fabrizio Colombo, Pierre Dechant, Jacques Helmstetter, G. Stacey Staples, Wei Wang.

This research was supported by the DFG.

Appendix: How Many Cameras does Each Camera See on Average?

Here we treat the number of sightings of other cameras in a sphere as a function of the distance to the center, the orientation and the FOV. In the two-dimensional case we consider a point \(z_0\) in a circle of radius r at distance \(d\in [0,r]\) from its center. It is convenient to introduce polar coordinates \(\rho \in [0,r]\), \(\theta \in [0,2\pi )\) choosing \(z_0\) as the origin. The circle’s boundary is viewed from the perspective of \(z_0\) as a curve with polar equation (see [14] for a derivation)

$$\begin{aligned} \rho (\theta ) = d\cos {\theta }+ \sqrt{r^2-d^2\!\sin ^2\!{\theta }} \end{aligned}$$
(37)

and it is straightforward to obtain the area of the disc segment sliced by the viewing angle \(\theta \in [a,b]\) at \(z_0\) as

$$\begin{aligned} A = \int \limits _{a}^{b}\int \limits _0^{\rho (\theta )}{\rho \,\mathrm{d}\rho \,\mathrm{d}\theta }. \end{aligned}$$
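Both (37) and this area integral are easy to sanity-check numerically. A minimal sketch (the helper names `rho` and `segment_area` are ours, not from the paper): every boundary point \(z_0 + \rho (\theta )\,e^{i\theta }\) must lie on the circle, and a full sweep of the viewing angle must recover the whole disc area \(\pi r^2\).

```python
import math

def rho(theta, d, r):
    """Eq. (37): distance from the camera at z_0 to the circle's boundary
    in direction theta (theta = 0 points from z_0 through the center)."""
    return d * math.cos(theta) + math.sqrt(r**2 - d**2 * math.sin(theta)**2)

def segment_area(a, b, d, r, n=20000):
    """Area of the disc slice seen over viewing angles [a, b], i.e.
    (1/2) * integral of rho(theta)^2 d theta (midpoint rule)."""
    h = (b - a) / n
    return 0.5 * h * sum(rho(a + (k + 0.5) * h, d, r)**2 for k in range(n))

r, d = 2.0, 1.3
# With the center at the origin and z_0 = (-d, 0), the boundary point
# z_0 + rho(theta) e^{i theta} must lie on the circle of radius r.
for theta in (0.0, 0.7, 2.0, 3.1):
    x = -d + rho(theta, d, r) * math.cos(theta)
    y = rho(theta, d, r) * math.sin(theta)
    assert abs(math.hypot(x, y) - r) < 1e-9

# A full sweep of the viewing angle recovers the whole disc.
assert abs(segment_area(0.0, 2 * math.pi, d, r) - math.pi * r**2) < 1e-6
print("Eq. (37) checks out")
```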

Choosing a polar orientation \(\phi \in [0,2\pi )\) for the camera at \(z_0\) and denoting the width of the field of view by \(FOV = 2\delta \), one obtains the above integral as a function of the parameters \(\phi \) and d, or more conveniently \(\epsilon = d/r\) (keeping r and \(\delta \) fixed), namely

$$\begin{aligned} A(\phi , \epsilon ) = \frac{r^2}{2}\!\int \limits _ {\phi \!-\!\delta }^{\phi \!+\!\delta } {\left( 1\!+\epsilon ^2\cos {2\theta }\!+2\epsilon \cos {\theta }\sqrt{1\!-\!\epsilon ^2\sin ^2\!{\theta }}\right) \!\mathrm{d}\theta }. \end{aligned}$$

The first two terms are trivial, while for the third one we use integration by parts, finally arriving at

$$\begin{aligned} \displaystyle A(\phi , \epsilon )= & {} \delta r^2 + \left( \frac{\epsilon r}{2}\right) ^2\sin (2\theta ) \Big |_{\phi \!-\!\delta }^{\phi \!+\!\delta } \nonumber \\&+ \frac{r^2}{2}\left( \epsilon \sin {\theta }\sqrt{1-\epsilon ^2\!\sin ^2\theta }+\arcsin {\epsilon \sin {\theta }}\right) \Big |_{\phi \!-\!\delta }^{\phi \!+\!\delta } \end{aligned}$$
(38)
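The closed form (38) can be checked against direct quadrature of the integrand above it; a small sketch (the function names are ours):

```python
import math

def area_closed(phi, eps, delta, r=1.0):
    """Eq. (38), evaluated between phi - delta and phi + delta."""
    def F(t):
        s = eps * math.sin(t)
        return ((eps * r)**2 / 4) * math.sin(2 * t) \
            + (r**2 / 2) * (s * math.sqrt(1 - s**2) + math.asin(s))
    return delta * r**2 + F(phi + delta) - F(phi - delta)

def area_quad(phi, eps, delta, r=1.0, n=100000):
    """Midpoint-rule quadrature of the integrand defining A(phi, eps)."""
    a, b = phi - delta, phi + delta
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        t = a + (k + 0.5) * h
        total += 1 + eps**2 * math.cos(2 * t) \
            + 2 * eps * math.cos(t) * math.sqrt(1 - eps**2 * math.sin(t)**2)
    return (r**2 / 2) * total * h

# Random-ish interior configurations (eps < 1, where the antiderivative
# is globally valid) agree to quadrature accuracy.
for phi, eps, delta in [(0.3, 0.5, 0.8), (1.2, 0.9, 0.4), (2.5, 0.2, 1.5)]:
    assert abs(area_closed(phi, eps, delta) - area_quad(phi, eps, delta)) < 1e-6
print("Eq. (38) matches direct quadrature")
```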

e.g. on the boundary of the unit disc \(\epsilon = r = 1\) one has

$$\begin{aligned} A = 2\delta + \sin { 2\delta } \cos {2\phi } \end{aligned}$$

while for an arbitrarily placed camera pointing towards the center (note that we always assume \(\delta \in [0,\pi ]\)) one has, again with \(r = 1\),

$$\begin{aligned} A = \delta + {\tilde{\delta }} + \sin {{\tilde{\delta }}}\,(\epsilon \cos {\delta }+\cos {{\tilde{\delta }}}), \qquad {\tilde{\delta }} = \arcsin {(\epsilon \sin {\delta })}. \end{aligned}$$
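An independent cross-check (ours, not from the paper): for \(\delta = \pi /2\) the region seen by a camera pointing towards the center is the unit disc minus the circular segment beyond the chord through \(z_0\) perpendicular to the central ray, whose area is given by the classical segment formula. Quadrature of (37) reproduces it:

```python
import math

def rho(theta, eps):
    # Eq. (37) with r = 1; theta = 0 points from the camera to the center.
    return eps * math.cos(theta) + math.sqrt(1 - eps**2 * math.sin(theta)**2)

def seen_area(eps, delta, n=100000):
    # (1/2) * integral of rho^2 over the FOV [-delta, delta] (midpoint rule).
    h = 2 * delta / n
    return 0.5 * h * sum(rho(-delta + (k + 0.5) * h, eps)**2 for k in range(n))

# For FOV = pi (delta = pi/2), the visible area is the unit disc minus the
# segment cut off by the chord at distance eps from the center:
#   A = pi - (arccos(eps) - eps * sqrt(1 - eps^2)).
for eps in (0.2, 0.5, 0.8):
    expected = math.pi - (math.acos(eps) - eps * math.sqrt(1 - eps**2))
    assert abs(seen_area(eps, math.pi / 2) - expected) < 1e-6
print("delta = pi/2 cross-check passes")
```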

Using formula (38) one obtains an estimate for the geometric probability

$$\begin{aligned} P = A/ A_0,\qquad A_0=\pi r^2 \end{aligned}$$

which, for the uniform distribution, corresponds to the fraction of agents seen by the camera at \(z_0\). The rotational symmetry allows us to work with the above-chosen ranges of the parameters \(\epsilon \in [0, 1]\), \(\phi \in [0,2\pi )\) and then average \(P(\phi ,\epsilon )\) over them (dividing by \(2\pi \)) in order to obtain the geometric probability and thus the average fraction of cameras seen by an arbitrary camera

$$\begin{aligned} \langle {P} \rangle = \frac{1}{2\pi ^2 r^2} \int \limits _{0}^{1}\!\int \limits _{0}^{2\pi } A(\phi , \epsilon )\, \mathrm{d}\phi \,\mathrm{d}\epsilon = \frac{\delta }{\pi }=\frac{FOV}{2\pi } \end{aligned}$$
(39)

where we use the periodicity of the trigonometric terms in (38) with respect to \(\phi \) and get the same result as in (21). Note that since the nonzero contribution to (39) does not depend explicitly on \(\epsilon \), the above relation holds for any position in the disc. This result may also be derived using the symmetries of circles and spheres, which is preferable especially in the 3D case, where explicit calculations are nontrivial except for a camera positioned at the center: there the observed domain becomes a spherical sector, whose volume is \(r^3/3\) times the solid viewing angle

$$\begin{aligned} FOV = 2\pi (1-\cos {\delta }). \end{aligned}$$

For an arbitrary position and orientation, however, averaging factors out the complexity just as in the planar case (cf. the physical setting of a uniformly charged spherical shell), and formula (27) still holds as if the camera were at the center.
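The symmetry argument behind (39) and its 3D analogue can be illustrated by a quick Monte Carlo experiment (ours, not from the paper): for any fixed pair of positions, a uniformly random orientation sees the other agent with probability \(\delta /\pi \) in the plane and \((1-\cos \delta )/2\) in space, so the position-averaged fractions match the central-camera values.

```python
import math
import random

random.seed(0)

def rand_disc():
    # uniform point in the unit disc (rejection sampling)
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return x, y

def rand_ball():
    # uniform point in the unit ball (rejection sampling)
    while True:
        p = [random.uniform(-1, 1) for _ in range(3)]
        if sum(c * c for c in p) <= 1:
            return p

def rand_dir():
    # uniform direction on the unit sphere (normalized Gaussian)
    while True:
        p = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in p))
        if n > 1e-9:
            return [c / n for c in p]

def visible_fraction_2d(delta, trials=100000):
    """Fraction of uniformly placed agents seen by a camera at a uniform
    position in the unit disc with uniform orientation; predicted delta/pi."""
    hits = 0
    for _ in range(trials):
        (cx, cy), (tx, ty) = rand_disc(), rand_disc()
        phi = random.uniform(0, 2 * math.pi)
        diff = (math.atan2(ty - cy, tx - cx) - phi + math.pi) % (2 * math.pi) - math.pi
        hits += abs(diff) <= delta
    return hits / trials

def visible_fraction_3d(delta, trials=100000):
    """Same experiment in the unit ball with a cone of half-angle delta;
    predicted FOV/(4 pi) = (1 - cos delta)/2, as for a central camera."""
    hits = 0
    for _ in range(trials):
        cam, tgt, axis = rand_ball(), rand_ball(), rand_dir()
        v = [t - c for t, c in zip(tgt, cam)]
        n = math.sqrt(sum(c * c for c in v))
        hits += sum(vi * ai for vi, ai in zip(v, axis)) >= n * math.cos(delta)
    return hits / trials

delta = 0.7
assert abs(visible_fraction_2d(delta) - delta / math.pi) < 0.01
assert abs(visible_fraction_3d(delta) - (1 - math.cos(delta)) / 2) < 0.01
print("Monte Carlo agrees with delta/pi and (1 - cos delta)/2")
```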

About this article

Cite this article

Brezov, D., Werman, M. Cameras Seeing Cameras Geometry. Adv. Appl. Clifford Algebras 32, 30 (2022). https://doi.org/10.1007/s00006-022-01211-5
