Taking Better Advantage of Fold Axis Data to Characterize Anisotropy of Complex Folded Structures in the Implicit Modeling Framework

Published in Mathematical Geosciences.

Abstract

When too few field measurements are available for the geological modeling of complex folded structures, the results of implicit methods typically exhibit an unsatisfactory bubbly aspect. However, in such cases, anisotropy data are often readily available but not fully exploited. Among them, fold axis data are a straightforward indicator of this local anisotropy direction. Focusing on the so-called potential field method, this work aims to evaluate the effect of the incorporation of such data into the modeling process. Given locally sampled fold axis data, this paper proposes to use the second-order derivatives of the scalar field in addition to the existing first-order ones. The mathematical foundation of the approach is developed, and the respective efficiencies of both kinds of constraints are tested. Their integration and impact are discussed based on a synthetic case study, thereby providing practical guidelines to geomodeling tool users on the parsimonious use of data for the geological modeling of complex folded structures.


References

  • Boisvert J, Manchuk J, Deutsch C (2009) Kriging in the presence of locally varying anisotropy using non-Euclidean distances. Math Geosci 41(5):585–601

  • Calcagno P, Chilès J-P, Courrioux G, Guillen A (2008) Geological modelling from field data and geological knowledge: Part I. Modelling method coupling 3D potential-field interpolation and geological rules. Phys Earth Planet Inter 171(1–4):147–157

  • Caumon G, Gray G, Antoine C, Titeux M-O (2012) Three-dimensional implicit stratigraphic model building from remote sensing data on tetrahedral meshes: theory and application to a regional model of La Popa Basin, NE Mexico. IEEE Trans Geosci Remote Sens 51(3):1613–1621

  • Chilès J-P, Delfiner P (2009) Geostatistics: modeling spatial uncertainty, vol 497. Wiley, Hoboken

  • Cowan EJ, Beatson RK, Ross HJ, Fright WR, McLennan TJ, Evans TR, Carr JC, Lane RG, Bright DV, Gillman AJ, Oshust PA, Titley M (2003) Practical implicit geological modelling. In: Fifth international mining geology conference. Australasian Institute of Mining and Metallurgy, Bendigo, Victoria, pp 17–19

  • de la Varga M, Wellmann JF (2016) Structural geologic modeling as an inference problem: a Bayesian perspective. Interpretation 4(3):SM1–SM16

  • Frank T, Tertois A-L, Mallet J-L (2007) 3D reconstruction of complex geological interfaces from irregularly distributed and noisy point data. Comput Geosci 33(7):932–943

  • Frodeman R (1995) Geological reasoning: geology as an interpretive and historical science. Geol Soc Am Bull 107(8):960–968

  • Garcia M, Cani M-P, Ronfard R, Gout C, Perrenoud C (2018) Automatic generation of geological stories from a single sketch. In: Proceedings of the joint symposium on computational aesthetics and sketch-based interfaces and modeling and non-photorealistic animation and rendering, pp 1–15

  • Grose L, Laurent G, Aillères L, Armit R, Jessell M, Caumon G (2017) Structural data constraints for implicit modeling of folds. J Struct Geol 104:80–92

  • Henrion V, Caumon G, Cherpeau N (2010) ODSIM: an object-distance simulation method for conditioning complex natural structures. Math Geosci 42(8):911–924

  • Hillier MJ, Schetselaar EM, de Kemp EA, Perron G (2014) Three-dimensional modelling of geological surfaces using generalized interpolation with radial basis functions. Math Geosci 46(8):931–953

  • Houlding SW (1994) 3D geoscience modeling: computer techniques for geological characterization. Springer, Berlin

  • Hudleston PJ, Treagus SH (2010) Information from folds: a review. J Struct Geol 32(12):2042–2071

  • Lajaunie C, Courrioux G, Manuel L (1997) Foliation fields and 3D cartography in geology: principles of a method based on potential interpolation. Math Geol 29(4):571–584

  • Laurent G, Ailleres L, Grose L, Caumon G, Jessell M, Armit R (2016) Implicit modeling of folds and overprinting deformation. Earth Planet Sci Lett 456:26–38

  • Mallet J-L (1992) Discrete smooth interpolation in geometric modelling. Comput Aided Des 24(4):178–191

  • Manchuk JG, Deutsch CV (2019) Boundary modeling with moving least squares. Comput Geosci 126:96–106

  • Maxelon M, Renard P, Courrioux G, Brändli M, Mancktelow N (2009) A workflow to facilitate three-dimensional geometrical modelling of complex poly-deformed geological units. Comput Geosci 35(3):644–658

  • McClay KR (2013) The mapping of geological structures. Wiley, Hoboken

  • Ramsay JG, Huber MI (1987) The techniques of modern structural geology: folds and fractures, vol 2. Academic Press, New York

  • Renaudeau J, Malvesin E, Maerten F, Caumon G (2019) Implicit structural modeling by minimization of the bending energy with moving least squares functions. Math Geosci 51(6):693–724

  • Turk G, O'Brien JF (2005) Modelling with implicit surfaces that interpolate. In: ACM SIGGRAPH 2005 courses. ACM, p 21

  • Wackernagel H (2013) Multivariate geostatistics: an introduction with applications. Springer, Berlin

  • Wellmann F, Caumon G (2018) 3-D structural geological models: concepts, methods, and uncertainties. Adv Geophys 59:1–121

  • Wendland H (1995) Piecewise polynomial, positive definite and compactly supported radial functions of minimal degree. Adv Comput Math 4(1):389–396


Acknowledgements

The authors would like to thank Gabriel Courrioux of BRGM for being a pioneer of the geomodeler software and for initiating this research work with his colleague, Bernard Bourgine. We are particularly grateful to Christian Lajaunie, inventor of the geostatistical potential field method, who participated extensively in the development of the consecutive derivatives of the covariance function. Support and advice from the Geostatistics Research Group, especially Emilie Chautru for her careful editing, were also helpful, as were those from the Geomodeling Team of BRGM. The authors thank Michael Hillier, Guillaume Caumon and the anonymous reviewer for their fruitful comments.

Author information

Correspondence to Laure Pizzella.

Appendices

Appendix A–Formulation of the Potential Field Method in Terms of Kriging Equations

Whereas the seminal paper by Lajaunie et al. (1997) adopted a dual (co-)kriging formulation, we retain here the universal (co-)kriging formulation.

The interested reader may consult Chapter 3.4 of the book by Chilès and Delfiner (2009) for greater detail about this derivation.

Equation 2 can be written as

$$\begin{aligned} \begin{aligned}{}[Z({\mathbf {p}}) - Z({\mathbf {p}}_0)]^*&= \sum ^{{ii'}_n}_{i i'} \lambda _{i i'} [Z(\mathbf {p}_i) - Z(\mathbf {p}_{i'}) ]\\&\quad \quad + \sum ^{g_n}_{g} \lambda _g Z'_g (\mathbf {p}_g) + \sum ^{s_n}_{s} \lambda _s Z''_s (\mathbf {p}_s), \end{aligned} \end{aligned}$$
(A.8)

where the estimate \([~ Z({\mathbf {p}}) - Z({\mathbf {p}}_0) ~]^*\) is a linear combination of: \({ii'}_n\) increment data; \(g_n\) first-order directional derivatives of potential data (tangents and/or components of gradient data); and \(s_n\) second-order directional derivatives of potential data. For the sake of simplicity, the directional derivatives Z\(^{\prime }\) and Z\(^{\prime \prime }\) are indexed with g and s rather than with the directions \(\tau _g\) and \(\tau _s\)

$$\begin{aligned} \begin{aligned}&Z'_g (\mathbf {p_g}) = D_{{{{\varvec{\tau }}_{\mathbf {g}}}}} Z(\mathbf {p_g}), \\&Z''_s (\mathbf {p_s}) = D_{{\varvec{\tau }}_{\mathbf {s}}}^2 {Z(\mathbf {p_s})}. \end{aligned} \end{aligned}$$
(A.9)

Similarly, in what follows, summations of first- and second-order directional derivatives of any function will likewise drop the explicit directions and be indexed by g and s.

The estimation error \(\epsilon \) is defined by

$$\begin{aligned} \begin{aligned} \epsilon&= [Z({\mathbf {p}}) - Z({\mathbf {p}}_0) ]^* - [Z({\mathbf {p}}) - Z({\mathbf {p}}_0) ]\\&= \sum ^{{ii'}_n}_{i i'} \lambda _{i i'} [Z(\mathbf {p}_i) - Z(\mathbf {p}_{i'}) ]+ \sum ^{g_n}_{g} \lambda _g Z'_g(\mathbf {p}_g) \\&\quad + \sum ^{s_n}_{s} \lambda _s Z''_s(\mathbf {p}_s) - [Z({\mathbf {p}}) - Z({\mathbf {p}}_0) ]. \end{aligned} \end{aligned}$$
(A.10)

1.1 Universality Conditions

Universality conditions ensure that the estimation is unbiased in the sense that the expectation of the estimation error is null, i.e. \(\mathrm {E}( \epsilon ) = 0.\)

Noting that

$$\begin{aligned} \begin{aligned}&\mathrm {E}[Z(\mathbf {p}) - Z({\mathbf {p}}_0) ]= m(\mathbf {p}) = \sum \limits ^{n}_{\ell } \nu _{\ell } ~ f^{\ell }(\mathbf {p}), \\&\mathrm {E}[Z'_g(\mathbf {p}_g) ]= m'_g(\mathbf {p}_g) = \sum \limits ^{n}_{\ell } \nu _{\ell } ~ f_g^{\ell }{'}(\mathbf {p}_g), \\&\mathrm {E}[Z''_s(\mathbf {p}_s) ]= m''_s(\mathbf {p}_s) = \sum \limits ^{n}_{\ell } \nu _{\ell } ~ f_s^{\ell }{''}(\mathbf {p}_s), \end{aligned} \end{aligned}$$
(A.11)

with the same abbreviation logic as previously, \(m'_g(\mathbf {p}_g)\) (resp. \(f_g^{\ell }{'}(\mathbf {p}_g)\)) standing for \(D_{{\varvec{\tau }}_{\mathbf {g}}} m({\mathbf {p}}_g)\) (resp. \(D_{{\varvec{\tau }}_{\mathbf {g}}} f^{\ell }({\mathbf {p}}_g)\)), the universality conditions write

$$\begin{aligned} \mathrm {E}[\epsilon ]= & {} ~\mathrm {E}\left[ ~ ~\sum ^{{ii'}_n}_{i i'} ~~ \lambda _{i i'} ~~ [~ Z(\mathbf {p}_i) - Z(\mathbf {p}_{i'}) ~ ]~ \right. \nonumber \\&\left. + \sum ^{g_n}_{g} ~~\lambda _g ~~ Z'_g(\mathbf {p}_g) ~ + \sum ^{s_n}_{s} ~~\lambda _s ~~ Z''_s(\mathbf {p}_s) ~ - [~ Z(\mathbf {p}) - Z({\mathbf {p}}_0) ~]~\right] \nonumber \\= & {} \sum ^{{ii'}_n}_{i i'} ~~ \lambda _{i i'} ~~ [~ \mathrm {E}[Z(\mathbf {p}_i)] - \mathrm {E}[Z(\mathbf {p}_{i'})] ~ ]~ + \sum ^{g_n}_{g} ~~\lambda _g ~~ \mathrm {E}[Z'_g(\mathbf {p}_g)] ~ \nonumber \\&+ \sum ^{s_n}_{s} ~~\lambda _s ~~ \mathrm {E}[Z''_s(\mathbf {p}_s)] ~ - \mathrm {E}[Z(\mathbf {p}) - Z({\mathbf {p}}_0)]\nonumber \\= & {} \sum ^{{ii'}_n}_{i i'} ~~ \lambda _{i i'} ~~ \left[~ \sum \limits ^{n}_{\ell } \nu _{\ell } ~ f^{\ell }(\mathbf {p}_i) - \sum \limits ^{n}_{\ell } \nu _{\ell } ~ f^{\ell }(\mathbf {p}_{i'}) ~ \right]~ \nonumber \\&+ \sum ^{g_n}_{g} ~~\lambda _g ~~ \sum \limits ^{n}_{\ell } \nu _{\ell } ~ f_g^{\ell }{'}(\mathbf {p}_g) ~ + \sum ^{s_n}_{s} ~~\lambda _s ~~ \sum \limits ^{n}_{\ell } \nu _{\ell } ~ f_s^{\ell }{''}(\mathbf {p}_s) \nonumber \\&- ~ \sum \limits ^{n}_{\ell } \nu _{\ell } ~ f^{\ell }(\mathbf {p}) \nonumber \\= & {} \sum \limits ^{n}_{\ell } \nu _{\ell } ~~ \left( ~\sum ^{{ii'}_n}_{i i'} ~ \lambda _{i i'} [f^{\ell }(\mathbf {p}_i) - f^{\ell }(\mathbf {p}_{i'}) ]~\right. \nonumber \\&\left. + ~ \sum ^{g_n}_{g} ~\lambda _g~ f_g^{\ell }{'}(\mathbf {p}_g) ~ + ~\sum ^{s_n}_{s} ~\lambda _s ~f_s^{\ell }{''}(\mathbf {p}_s) - f^{\ell }(\mathbf {p}) ~ \right) \nonumber \\= & {} 0, \end{aligned}$$
(A.12)

which must be true for all values of \(\nu _{\ell }\) and implies n equations (universality constraints), as many as there are drift functions

$$\begin{aligned}&\sum ^{{ii'}_n}_{i i'} \lambda _{i i'} ~ [f^{\ell }(\mathbf {p}_i)~ -~ f^{l}(\mathbf {p}_{i'}) ]~ + ~\sum ^{g_n}_{g} \lambda _g ~ f_g^{\ell }{'}(\mathbf {p}_g) \nonumber \\&\quad ~+~ \sum ^{s_n}_{s} \lambda _s ~ f_s^{\ell }{''}(\mathbf {p}_s)~ -~ f^{\ell }(\mathbf {p}) = ~ 0 \quad \forall \ell \in \{1, \ldots , n\} \end{aligned}$$
(A.13)

denoted \(T^{\ell }(\lambda _{ii'}, \lambda _g, \lambda _s) = 0\) in the following.

1.2 Optimality Conditions

Optimality conditions ensure that \(\mathrm {Var}(\epsilon )\) is minimal.

In this part, the following convention is adopted.

$$\begin{aligned} \begin{aligned}&\mathrm {Cov}(Z(\mathbf {p}_i), ~ Z(\mathbf {p}_j)) = C_{i,j}^{\bullet \bullet }~, \\&\mathrm {Cov}(Z'_g(\mathbf {p}_g), ~ Z(\mathbf {p}_j)) = C_{g,j}^{\bullet ' \bullet }~, \\&\mathrm {Cov}(Z''_s(\mathbf {p}_s), ~ Z'_g(\mathbf {p}_g)) = C_{s,g}^{\bullet '' \bullet '}. \end{aligned} \end{aligned}$$
(A.14)

When double indices are used, the notation will refer to increment values; for example, \(C_{ii',j}^{\bullet \bullet }\) will refer to \(\mathrm {Cov}(Z(\mathbf {p}_i)-Z(\mathbf {p}_{i'}), ~ Z(\mathbf {p}_j))\).

The variance of the estimation error (A.10) is

$$\begin{aligned} \begin{aligned} \mathrm {Var}(\epsilon ) =&~ \mathrm {Var}\left[ ~ \sum ^{{ii'}_n}_{i i'} ~~ \lambda _{i i'} ~~ [~ Z(\mathbf {p}_i) - Z(\mathbf {p}_{i'}) ~ ]~\right. \\&\quad \left. + \sum ^{g_n}_{g} ~~\lambda _g ~~ Z'_g(\mathbf {p}_g) ~ + \sum ^{s_n}_{s} ~~\lambda _s ~~ Z''_s(\mathbf {p}_s) ~ - [~ Z(\mathbf {p}) - Z({\mathbf {p}}_0) ~]~ \right] \\ =&~ \sum _{ii', ~i_{2}i_{3}} ~ \lambda _{i i'} \lambda _{i_{2}i_{3}} ~ C_{ii', ~i_{2}i_{3}}^{\bullet \bullet } ~ \\&\quad + \sum _{g, g'} ~~\lambda _g \lambda _{g'} ~~ C_{g,g'}^{\bullet ' \bullet '}+ \sum _{s, s'} ~~\lambda _s \lambda _{s'} ~~ C_{s,s'}^{\bullet '' \bullet ''} - \mathrm {Var}[Z(\mathbf {p})] \\&+ ~ 2 ~~ \left( ~ \sum _{i i', g} \lambda _{i i'} \lambda _{g} ~ C_{ii', g}^{\bullet \bullet '} + \sum _{i i', s} \lambda _{i i'} \lambda _{s} ~ C_{ii', s}^{\bullet \bullet ''} \right. \\&\quad + \sum _{g, s} \lambda _{g} \lambda _{s} ~ C_{g, s}^{\bullet ' \bullet ''} - \sum _{i i', p} \lambda _{i i'} ~ C_{ii', p}^{\bullet \bullet } \\&\quad \left. - \sum _{g, p} \lambda _g ~ C_{g, p}^{\bullet ' \bullet } - \sum _{s, p} \lambda _s ~ C_{s, p}^{\bullet '' \bullet } ~\right) . \end{aligned} \end{aligned}$$
(A.15)

Lagrange multipliers are used to minimize the quadratic form \(\mathrm {Var}[\epsilon (\lambda _{ii'}, \lambda _g, \lambda _s)]\) under the constraint that all universality conditions hold, i.e. \(T^\ell (\lambda _{ii'}, \lambda _g, \lambda _s) = 0\) for all \(\ell \).

This implies that

$$\begin{aligned} \mathrm {Var}[\epsilon (\lambda _{ii'}, \lambda _g, \lambda _s) ] + 2 ~ \sum _\ell \mu _\ell ~ T^\ell (\lambda _{ii'}, \lambda _g, \lambda _s) \end{aligned}$$
(A.16)

must have null partial derivatives with respect to each of the \(\lambda _{ii'}\), \(\lambda _g\), \(\lambda _s\) and each of the Lagrange multipliers \(\mu _\ell \).

The Lagrangian L is then given by

$$\begin{aligned} \begin{aligned} L(\lambda _{ii'}, \lambda _g, \lambda _s, \mu _\ell ) =&\sum _{ii', i_{2}i_{3}} \lambda _{i i'} \lambda _{i_{2}i_{3}} ~ C_{ii', i_{2}i_{3}}^{\bullet \bullet } + \sum _{g,g'} \lambda _g \lambda _{g'} ~ C_{g,g'}^{\bullet ' \bullet '} + \sum _{s,s'} \lambda _s \lambda _{s'} ~ C_{s,s'}^{\bullet ''\bullet ''} \\&+ 2 ~ \left( ~ \sum _{ii', g} \lambda _{ii'} \lambda _g ~ C_{ii', g}^{\bullet \bullet '} + ~ \sum _{ii', s} \lambda _{ii'} \lambda _{s} ~ C_{ii', s}^{\bullet \bullet ''} + ~ \sum _{g,s} \lambda _g \lambda _s ~ C_{g,s}^{\bullet '\bullet ''} ~ \right) \\&- 2 ~ \left( ~ \sum _{ii'} \lambda _{ii'} ~ C_{p,ii'}^{\bullet \bullet } + ~ \sum _{g} \lambda _{g}~ C_{p,g}^{\bullet \bullet '} + ~ \sum _{s} \lambda _s ~ C_{p,s}^{\bullet \bullet ''} ~ \right) \\&+ 2~ \sum \limits _{\ell }~ \mu _{\ell } ~ T^\ell (\lambda _{ii'}, \lambda _g, \lambda _s). \end{aligned}\nonumber \\ \end{aligned}$$
(A.17)

Deriving this Lagrangian according to \(\lambda _{ii'}\), \(\lambda _{g}\), \(\lambda _{s}\) and \(\mu _\ell \) gives

$$\begin{aligned} \left\{ \begin{array}{ll} \sum \limits _{i_{2}i_{3}} \lambda _{i_{2}i_{3}} ~ C_{ii', i_{2}i_{3}}^{\bullet \bullet } + \sum \limits _{g} \lambda _g ~ C_{ii',g}^{\bullet \bullet '} + \sum \limits _{s} \lambda _{s} ~ C_{ii',s}^{\bullet \bullet ''} + \sum \limits _{\ell } ~ \mu _{\ell } ~ [f^{\ell }(\mathbf {p}_i) - f^{\ell }(\mathbf {p}_{i'}) ]= C_{p, ii'}^{\bullet \bullet } \quad \quad \forall ~(i,i') \\ \sum \limits _{g'} \lambda _{g'} ~ C_{g,g'}^{\bullet ' \bullet '} + \sum \limits _{ii'} \lambda _{ii'} ~ C_{ii',g}^{\bullet \bullet '} + \sum \limits _{s} \lambda _s ~ C_{g,s}^{\bullet '\bullet ''} + \sum \limits _{\ell } \mu _{\ell } ~ f_g^{\ell }{'}(\mathbf {p}_g) = C_{p,g}^{\bullet \bullet '} \quad \quad \forall ~ g \\ \sum \limits _{s'} \lambda _{s'} ~ C_{s,s'}^{\bullet ''\bullet ''} + \sum \limits _{ii'} \lambda _{ii'} ~ C_{ii',s}^{\bullet \bullet ''} + \sum \limits _{g} \lambda _g ~ C_{g,s}^{\bullet '\bullet ''} + \sum \limits _{\ell } \mu _{\ell } ~ f_s^{\ell }{''}(\mathbf {p}_s) = C_{p,s}^{\bullet \bullet ''} \quad \quad \forall ~ s \\ \sum \limits _{i i'} \lambda _{i i'} ~ [f^{\ell }(\mathbf {p}_i)- f^{\ell }(\mathbf {p}_{i'}) ]+ \sum \limits _{g} \lambda _g ~ f_g^{\ell }{'} (\mathbf {p}_g) + \sum \limits _{s} \lambda _s ~ f_s^{\ell }{''} (\mathbf {p}_s) = f^\ell (\mathbf {p}) \quad \quad \forall ~ \ell \end{array} \right. \end{aligned}$$
(A.18)

which defines the kriging equations and can be written in matrix form as

$$\begin{aligned} \begin{pmatrix}K&{}F\\ {}^t F &{}0\\ \end{pmatrix} \begin{pmatrix}\lambda \\ \mu \end{pmatrix} = \begin{pmatrix}K_{p} \\ F_{p} \end{pmatrix} \end{aligned}$$
(A.19)

Denoting \(N = (ii'_n + g_n + s_n)\) the total number of data and p the number of points to estimate, K is \(N \times N\), \(K_p\) is \(N \times p\), F is \(N \times n\), \(F_p\) is \(n \times p\), and 0 is an \(n \times n\) zero block.

$$\begin{aligned} \begin{aligned} K&= \begin{pmatrix} K_{ii', i_{2}i_{3}}^{\bullet \bullet } &{} K_{ii',g}^{\bullet \bullet '} &{} K_{ii',s}^{\bullet \bullet ''} \\ K_{g, ii'}^{\bullet ' \bullet } &{} K_{g,g'}^{\bullet ' \bullet '} &{} K_{g,s}^{\bullet ' \bullet ''} \\ K_{s, ii'}^{\bullet '' \bullet } &{} K_{s,g}^{\bullet ''\bullet '} &{} K_{s,s'}^{\bullet ''\bullet ''} \end{pmatrix} ~~ K_p = \begin{pmatrix} K_{ii', p}^{\bullet \bullet } \\ K_{g, p}^{\bullet ' \bullet } \\ K_{s, p}^{\bullet ''\bullet } \\ \end{pmatrix} ~~ F_p = \begin{pmatrix} f^{1}(\mathbf {p}_1) &{} \cdots &{} f^{1}(\mathbf {p}_p) \\ &{} \cdots &{} \\ f^{n}(\mathbf {p}_1) &{} \cdots &{} f^{n}(\mathbf {p}_p)\\ \end{pmatrix} \\ F&= \begin{pmatrix} f^{1}(\mathbf {p}_i) - f^{1}(\mathbf {p}_{i'}) &{} \cdots &{} f^{n}(\mathbf {p}_i) - f^{n}(\mathbf {p}_{i'}) \\ f_g^{1'}(\mathbf {p}_g) &{} \cdots &{} f_g^{n'}(\mathbf {p}_g) \\ f_s^{1''}(\mathbf {p}_s) &{} \cdots &{} f_s^{n''}(\mathbf {p}_s) \\ \end{pmatrix} \end{aligned} \end{aligned}$$
(A.20)

with the same abbreviation logic \(f_g^{\ell }{'}(\mathbf {p}_g) = D_{{\varvec{\tau }}_{\mathbf {g}}} f^{\ell }(\mathbf {p}_g) \) and \(f_s^{\ell }{''}(\mathbf {p}_s) = D_{{\varvec{\tau }}_{\mathbf {s}}}^2 {f^{\ell }(\mathbf {p}_s)}\).
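As a concrete illustration of how the saddle-point system of Eq. A.19 can be assembled and solved, here is a minimal sketch in Python (NumPy assumed) with randomly generated stand-in matrices; in an actual implementation, K and \(K_p\) would be filled from the covariance model and its derivatives (Appendix B), and F, \(F_p\) from the drift functions. All names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: N constraint rows (increments plus first- and
# second-order derivative data together) and n drift functions.
N, n, n_est = 6, 2, 1

# Stand-in matrices: in practice K and K_p come from the covariance model
# and its derivatives (Appendix B), F and F_p from the drift functions.
B = rng.normal(size=(N, N))
K = B @ B.T + N * np.eye(N)           # symmetric positive definite stand-in
F = rng.normal(size=(N, n))
K_p = rng.normal(size=(N, n_est))
F_p = rng.normal(size=(n, n_est))

# Assemble and solve the saddle-point system of Eq. A.19.
lhs = np.block([[K, F], [F.T, np.zeros((n, n))]])
rhs = np.vstack([K_p, F_p])
sol = np.linalg.solve(lhs, rhs)
lam, mu = sol[:N], sol[N:]
```

Whatever the covariance and drift content, the recovered weights satisfy the first block row (the kriging equations proper) and the second block row \({}^t F \lambda = F_p\), i.e. the universality constraints of Eq. A.13.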

Appendix B–Covariance Function Derivation Developments

1.1 Definitions

Covariance function of Z between point \({\mathbf {p}}\) and point \({\mathbf {q}}\)

$$\begin{aligned} ({\mathbf {p}},{\mathbf {q}}) \mapsto \mathrm {Cov}(Z({\mathbf {p}}), Z({\mathbf {q}})) \quad \quad \mathbb {R}^3 \times \mathbb {R}^3 \rightarrow \mathbb {R}. \end{aligned}$$
(B.21)

Second-order stationarity assumption

$$\begin{aligned} \mathrm {Cov}(Z({\mathbf {p}}), Z({\mathbf {q}})) = C_Z({\mathbf {p}}-{\mathbf {q}}) = C_Z ( {\mathbf {h}}_{pq} ), \end{aligned}$$
(B.22)

with

$$\begin{aligned} {\mathbf {h}}_{pq} \mapsto C_Z ( {\mathbf {h}}_{pq} ) \quad \quad \mathbb {R}^3 \rightarrow \mathbb {R}. \end{aligned}$$
(B.23)

Case of constant anisotropic distance (global anisotropy)

$$\begin{aligned} C_Z ( {\mathbf {h}}_{pq} ) = K(r({\mathbf {h}}_{pq})) , \end{aligned}$$
(B.24)

with

$$\begin{aligned}&K \quad \quad \mathbb {R}\rightarrow \mathbb {R}\quad \quad \text {the chosen covariance model function}, \end{aligned}$$
(B.25)
$$\begin{aligned}&r({\mathbf {h}}_{pq}) = \Vert {\mathbf {h}}_{pq} \Vert _A = \sqrt{ ({\mathbf {p}}- {\mathbf {q}})^t ~ A ~ ({\mathbf {p}} - {\mathbf {q}}) } \quad \quad \mathbb {R}^3 \rightarrow \mathbb {R}\quad \quad \text {the anisotropic norm},\nonumber \\ \end{aligned}$$
(B.26)

where A is a fixed symmetric positive definite (rotation/dilatation) matrix of \(\mathbb {R}^3\).

In the following, \(r({\mathbf {h}}_{pq})\) may be denoted as r for readability.
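As a minimal sketch (Python with NumPy assumed; the matrix A and the points are illustrative), the anisotropic norm of Eq. B.26 is:

```python
import numpy as np

def anisotropic_norm(p, q, A):
    """Anisotropic distance r(h) = sqrt(h^t A h), h = p - q (Eq. B.26).

    A is the (symmetric positive definite) rotation/dilatation matrix;
    the values used below are purely illustrative.
    """
    h = np.asarray(p, float) - np.asarray(q, float)
    return float(np.sqrt(h @ A @ h))

# With A = I the anisotropic norm reduces to the Euclidean distance.
r = anisotropic_norm([1.0, 2.0, 2.0], [0.0, 0.0, 0.0], np.eye(3))
```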

1.2 Notations

1.2.1 Differentiation with Respect to Variable \(\mathbf {p}\) or \(\mathbf {q}\)

Although by assumption the covariance function of Z between points \({\mathbf {p}}\) and \({\mathbf {q}}\) only depends on the anisotropic distance r between them, in order to compute the partial derivatives needed in our model, we still have to consider it as a function of two variables \({\mathbf {p}}\) and \({\mathbf {q}}\) in \(\mathbb {R}^3\). In the following, we will need to differentiate with respect to either of them, which we will denote with a superscript indicating the variable we differentiate upon.

For instance, for a function \(F: \mathbb {R}^3 \times \mathbb {R}^3 \rightarrow \mathbb {R}\), we note the partial derivative with respect to \({\mathbf {p}}\) along the vector \({\varvec{\tau }}\) as

$$\begin{aligned} D^p_{\varvec{\tau }} F(\mathbf {p},\mathbf {q}) = \lim \limits _{h \rightarrow 0} \frac{1}{h} \left( F({\mathbf {p}} + h {\varvec{\tau }}, {\mathbf {q}}) - F({\mathbf {p}} , {\mathbf {q}}) \right) . \end{aligned}$$

Denoting \(({\mathbf {e}}_x, {\mathbf {e}}_y, {\mathbf {e}}_z)\) as the standard basis of \(\mathbb {R}^3\), we can then define the gradient with respect to \(\mathbf {p}\) as

$$\begin{aligned} \nabla ^p F(\mathbf {p}, \mathbf {q}) = \begin{pmatrix} D_{{\mathbf {e}}_x}^p ~ F(\mathbf {p},\mathbf {q}) \\ D_{{\mathbf {e}}_y}^p ~ F(\mathbf {p},\mathbf {q}) \\ D_{{\mathbf {e}}_z}^p ~ F(\mathbf {p},\mathbf {q}) \end{pmatrix}. \end{aligned}$$

By differentiating this with respect to \(\mathbf {q}\), we can define \(D^{p,q}F(\mathbf {p},\mathbf {q})\), and by further differentiating with respect to one or the other variable, we can define higher-order differentials, like \(D^{p,p,q} F(\mathbf {p},\mathbf {q})\) or \(D^{p,p,q,q} F(\mathbf {p},\mathbf {q})\).

The dimensions of these quantities must be kept in mind in the following equations: \(D^p_{\varvec{\tau }} F(\mathbf {p},\mathbf {q})\) is a scalar, \(\nabla ^p F(\mathbf {p}, \mathbf {q})\) is a 3D vector, \(D^{p,q} F(\mathbf {p}, \mathbf {q})\) is a \(3\times 3\) matrix, \(D^{p,p,q} F(\mathbf {p}, \mathbf {q})\) is a \(3\times 3\times 3\) third-order tensor, and \(D^{p,p,q,q} F(\mathbf {p},\mathbf {q})\) is a \(3\times 3\times 3\times 3\) fourth-order tensor.

Successive derivatives of the covariance model K, which is a scalar function, are classically denoted by \(K'\) for the first derivative and by \(K^{(n)}\) for the nth derivative.

1.2.2 Antisymmetry Property

In the particular case where F can be written as a function of \(\mathbf {p}- \mathbf {q}\), i.e. there exists a function \(G : \mathbb {R}^3 \rightarrow \mathbb {R}\) such that \(F(\mathbf {p},\mathbf {q}) = G(\mathbf {p}- \mathbf {q})\), differentiating with respect to \(\mathbf {p}\) is the same as differentiating with respect to \(\mathbf {q}\) and changing the sign of the result.

In the present case, under our second-order stationary assumption, the covariance function of Z and the anisotropic distance satisfy this antisymmetry property, so we can substitute \(\mathbf {p}\) for \(\mathbf {q}\) in their differentials simply by changing the sign. The final sign of the substituted covariance function will then depend on the parity of the number of substitutions as follows: \( (-1)^n ~ D^{p, p, \ldots } C_Z({\mathbf {h}}_{pq}) = D^{q, q, \ldots } C_Z({\mathbf {h}}_{pq}) \) where n is the number of substitutions.
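The antisymmetry property is easy to check numerically. The sketch below (Python with NumPy; the Gaussian-shaped G is only an illustrative function of \(\mathbf {p}- \mathbf {q}\)) compares centered finite differences with respect to \(\mathbf {p}\) and \(\mathbf {q}\) along the same direction:

```python
import numpy as np

G = lambda h: np.exp(-h @ h)       # illustrative function of p - q only
F = lambda p, q: G(p - q)

p = np.array([0.3, -0.2, 0.5])
q = np.array([-0.1, 0.4, 0.0])
eps, e = 1e-6, np.array([1.0, 0.0, 0.0])

# Centered differences with respect to p and to q along the same direction:
# by the antisymmetry property they should be opposite.
dFdp = (F(p + eps*e, q) - F(p - eps*e, q)) / (2*eps)
dFdq = (F(p, q + eps*e) - F(p, q - eps*e)) / (2*eps)
```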

1.3 First Derivative of the Covariance Function

The directional derivative at point \({\mathbf {p}}\) along \({\mathbf {e}}_x\) is

$$\begin{aligned} \begin{aligned} \mathrm {Cov}(D_{{\mathbf {e}}_x}Z(\mathbf {p}), ~ Z({\mathbf {q}}))&= D_{{\mathbf {e}}_x}^p ~ \mathrm {Cov}(Z({\mathbf {p}}), Z({\mathbf {q}})) \\&= \mathrm {Cov}( \lim \limits _{\begin{array}{c} \alpha \rightarrow 0 \end{array}} \frac{Z({\mathbf {p}} + \alpha ~ {\mathbf {e}}_x) - Z({\mathbf {p}} ) }{ \alpha }, ~ Z({\mathbf {q}}) ) \\&= \lim \limits _{\begin{array}{c} \alpha \rightarrow 0 \end{array} } ~ \frac{1}{\alpha } ~ [\mathrm {Cov}( Z({\mathbf {p}} + \alpha ~ {\mathbf {e}}_x) ,~ Z({\mathbf {q}})) - \mathrm {Cov}( Z({\mathbf {p}}) ,~ Z({\mathbf {q}}) ) ]\\&= \lim \limits _{\begin{array}{c} \alpha \rightarrow 0 \end{array} } ~ \frac{1}{\alpha } ~ [C_Z ( {\mathbf {h}}_{pq} + \alpha {\mathbf {e}}_x) - C_Z ({\mathbf {h}}_{pq}) ]= D_{{\mathbf {e}}_x}^p ~ C_Z ( {\mathbf {h}}_{pq} ) \\&= \lim \limits _{\begin{array}{c} \alpha \rightarrow 0 \end{array} } ~ \frac{1}{\alpha } ~ [K ( r( {\mathbf {h}}_{pq} + \alpha {\mathbf {e}}_x ) ) - K (r ({\mathbf {h}}_{pq})) ]= D_{{\mathbf {e}}_x}^p ~ K ( r ( {\mathbf {h}}_{pq} )). \end{aligned} \end{aligned}$$
(B.27)

Thus, we obtain

$$\begin{aligned} D_{{\mathbf {e}}_x}^p ~ \mathrm {Cov}(Z({\mathbf {p}}), Z({\mathbf {q}})) = D_{{\mathbf {e}}_x}^p ~ C_Z ( {\mathbf {h}}_{pq} ) = D_{{\mathbf {e}}_x}^p ~ K ( r ( {\mathbf {h}}_{pq} )). \end{aligned}$$
(B.28)

The gradient of \(C_Z\) with respect to \(\mathbf {p}\) is

$$\begin{aligned} \nabla ^p C_Z ({\mathbf {h}}_{pq}) = \begin{pmatrix} D_{{\mathbf {e}}_x}^p ~ C_Z ( {\mathbf {h}}_{pq} ) \\ D_{{\mathbf {e}}_y}^p ~ C_Z ( {\mathbf {h}}_{pq} ) \\ D_{{\mathbf {e}}_z}^p ~ C_Z ( {\mathbf {h}}_{pq} ) \\ \end{pmatrix}. \end{aligned}$$
(B.29)

First derivative specification of \(C_Z\) with respect to \({\mathbf {p}}\)

$$\begin{aligned} \boxed { \nabla ^p ~ C_Z ( {\mathbf {h}}_{pq} ) = \nabla ^p r({\mathbf {h}}_{pq}) ~. ~ K'(r({\mathbf {h}}_{pq}))}, \end{aligned}$$
(B.30)

with

$$\begin{aligned} \nabla ^p r({\mathbf {h}}_{pq}) = \frac{A ~ {\mathbf {h}}_{pq}}{r({\mathbf {h}}_{pq})}. \end{aligned}$$
(B.31)

Let us note again that

$$\begin{aligned} \nabla ^p r({\mathbf {h}}_{pq}) = - \nabla ^q r({\mathbf {h}}_{pq}). \end{aligned}$$
(B.32)

Equation B.30 leads to two conditions

  • K should be differentiable,

  • For \(\nabla ^p ~ C_Z ( {\mathbf {h}}_{pq} )\) to be defined where \(\mathbf {p}=\mathbf {q}\), we need \(K'(0) = 0\).
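Equations B.30–B.31 can be verified numerically. The following sketch (Python with NumPy; the Gaussian model \(K(r) = e^{-r^2}\) and the anisotropy matrix are illustrative choices satisfying both conditions above) compares the analytic gradient with centered finite differences:

```python
import numpy as np

# Gaussian covariance model K(r) = exp(-r^2): infinitely differentiable
# with K'(0) = 0, so both conditions above hold (illustrative choice).
K  = lambda r: np.exp(-r**2)
dK = lambda r: -2.0 * r * np.exp(-r**2)

A = np.diag([1.0, 2.0, 4.0])       # illustrative anisotropy matrix

def r_of(p, q):
    h = p - q
    return np.sqrt(h @ A @ h)

def grad_p_C(p, q):
    """Eq. B.30: grad^p C_Z = K'(r) grad^p r, with grad^p r = A h / r (Eq. B.31)."""
    h = p - q
    r = r_of(p, q)
    return dK(r) * (A @ h) / r

p = np.array([0.3, -0.2, 0.5])
q = np.array([-0.1, 0.4, 0.0])
analytic = grad_p_C(p, q)

# Componentwise centered finite differences with respect to p.
eps = 1e-6
fd = np.array([(K(r_of(p + eps*e, q)) - K(r_of(p - eps*e, q))) / (2*eps)
               for e in np.eye(3)])
```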

1.3.1 Generalization: First-Order Derivative of the Covariance Function in a Specific Direction

$$\begin{aligned} \begin{aligned} \mathrm {Cov}(D_{\tau } ~ Z({\mathbf {p}}), ~ Z({\mathbf {q}}))&= \mathrm {Cov}({\varvec{\tau }}_p~ . \nabla Z ( {\mathbf {p}}) , ~ Z({\mathbf {q}})) \\&= {\varvec{\tau }}_p ~ . \nabla ^p ~ \mathrm {Cov}( Z ( {\mathbf {p}}) , Z({\mathbf {q}}) )\\&= {\varvec{\tau }}_p ~ . ~ \nabla ^p ~ C_Z ( {\mathbf {h}}_{pq} ) . \end{aligned} \end{aligned}$$
(B.33)

1.4 Second-Order Derivative of the Covariance Function

Covariance of partial derivatives w.r.t. \({\mathbf {p}}\) according to \({\mathbf {e}}_x\) and w.r.t. \({\mathbf {q}}\) according to \({\mathbf {e}}_y\) is

$$\begin{aligned} \mathrm {Cov}( D_{{\mathbf {e}}_x} Z(\mathbf {p}), D_{{\mathbf {e}}_y} Z(\mathbf {q})) = ~~ ^t{\mathbf {e}}_x . D^{p,q}~ C_Z ( {\mathbf {h}}_{pq} ) . {\mathbf {e}}_y. \end{aligned}$$
(B.34)

Second-order derivative specification of \(C_Z\) with respect to \({\mathbf {p}}\) then \({\mathbf {q}}\)

$$\begin{aligned} \boxed { D^{p,q}~ C_Z ( {\mathbf {h}}_{pq} ) = K^{(2)}(r({\mathbf {h}}_{pq})) ~~ \nabla ^{q}r({\mathbf {h}}_{pq}) . ^{t}\nabla ^{p}r({\mathbf {h}}_{pq}) ~ + ~ K'(r({\mathbf {h}}_{pq})) ~~ D^{p,q} r({\mathbf {h}}_{pq}),}\nonumber \\ \end{aligned}$$
(B.35)

with

$$\begin{aligned} D^{p,q}~ r({\mathbf {h}}_{pq}) = - \frac{ A }{r({\mathbf {h}}_{pq})} + \frac{A ~ {\mathbf {h}}_{pq} ~~~ ^{t}(A ~ {\mathbf {h}}_{pq})}{r^3({\mathbf {h}}_{pq})} \quad \quad \in {\mathcal {M}}_3(\mathbb {R}). \end{aligned}$$
(B.36)

Let us note again that

$$\begin{aligned} D^{p,q}~ C_Z ( {\mathbf {h}}_{pq} ) = - D^{q,q}~ C_Z ( {\mathbf {h}}_{pq} ) = - D^{p,p}~ C_Z ( {\mathbf {h}}_{pq} ) . \end{aligned}$$
(B.37)

Equation B.35 leads to two conditions

  • K should be twice differentiable,

  • For \(D^{p,q}~ C_Z ( {\mathbf {h}}_{pq} )\) to be defined at \({\mathbf {h}}_{pq} = 0\), we need \(K'(0) = 0\).
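The same kind of finite-difference check applies to the cross second derivative of Eqs. B.35–B.36 (Python with NumPy; the Gaussian model and the anisotropy matrix are again illustrative):

```python
import numpy as np

K   = lambda r: np.exp(-r**2)            # illustrative Gaussian model
dK  = lambda r: -2*r*np.exp(-r**2)
d2K = lambda r: (4*r**2 - 2)*np.exp(-r**2)

A = np.diag([1.0, 2.0, 4.0])             # illustrative anisotropy matrix
r_of = lambda h: np.sqrt(h @ A @ h)

def Dpq_r(h):
    """Eq. B.36: D^{p,q} r = -A/r + (A h)(A h)^t / r^3."""
    r, t = r_of(h), A @ h
    return -A / r + np.outer(t, t) / r**3

def Dpq_C(h):
    """Eq. B.35, using grad^q r = -grad^p r = -A h / r."""
    r, t = r_of(h), A @ h
    gp = t / r
    return d2K(r) * np.outer(-gp, gp) + dK(r) * Dpq_r(h)

p = np.array([0.3, -0.2, 0.5])
q = np.array([-0.1, 0.4, 0.0])
analytic = Dpq_C(p - q)

# Mixed centered differences: d^2 C / dp_i dq_j.
eps = 1e-4
C = lambda p_, q_: K(r_of(p_ - q_))
fd = np.array([[(C(p+eps*ei, q+eps*ej) - C(p+eps*ei, q-eps*ej)
                 - C(p-eps*ei, q+eps*ej) + C(p-eps*ei, q-eps*ej)) / (4*eps**2)
                for ej in np.eye(3)] for ei in np.eye(3)])
```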

1.4.1 Second-Order Derivative of the Covariance Function in a Specific Direction

$$\begin{aligned} \begin{aligned} \mathrm {Cov}(D^2_{\tau } ~ Z({\mathbf {p}}) , ~ Z({\mathbf {q}}))&= \mathrm {Cov}(~^t{\varvec{\tau }}~ . D^{2} Z ( {\mathbf {p}}) . {\varvec{\tau }} , ~ Z({\mathbf {q}})) \quad \quad \mathbb {R}^3 \times \mathbb {R}^3 \rightarrow \mathbb {R}\\&= ~^t{\varvec{\tau }} . D^{p,p}~ C_Z ( {\mathbf {h}}_{pq} ) . {\varvec{\tau }} \\&= \sum _i \sum _j {\varvec{\tau }}[i] ~ {\varvec{\tau }}[j]~ D^{p,p} ~ C_Z ({\mathbf {h}}_{pq})[i,j] . \end{aligned} \end{aligned}$$
(B.38)

1.5 Third-Order Derivative of the Covariance Function

$$\begin{aligned} \begin{aligned} \mathrm {Cov}(D^2_{\tau } ~ Z({\mathbf {p}}) , D_{\mu } Z({\mathbf {q}}) )&= \mathrm {Cov}(~^t{\varvec{\tau }}~ . D^{2} Z ( {\mathbf {p}}) . {\varvec{\tau }} ~ , ~ ~^{t}\varvec{\mu } ~ . \nabla Z( {\mathbf {q}}) )\\&= \sum _i \sum _j \sum _k {\varvec{\tau }}[i] ~ {\varvec{\tau }}[j] ~ \varvec{\mu }[k] ~ D^{p,p,q} ~ C_Z ({\mathbf {h}}_{pq})[i,j,k].\end{aligned} \end{aligned}$$
(B.39)

Recalling that

$$\begin{aligned} D^{p,p,q} ~ C_Z ({\mathbf {h}}_{pq}) = - D^{p,p,p} ~ C_Z ({\mathbf {h}}_{pq}), \end{aligned}$$
(B.40)

where \(D^{p,p,p} ~ C_Z ({\mathbf {h}}_{pq}) \) is a third-order tensor (\(3\times 3\times 3\)) given by

$$\begin{aligned} \boxed { \begin{aligned} D^{p,p,p} ~ C_Z ({\mathbf {h}}_{pq}) [i,j,k] = ~~&K'(r) ~~ D^{p, p, p}r[i,j,k]~~ \\ +&~ K^{(2)}(r) ~~ ( \nabla ^p r [i] D^{p,p}r[j,k] + \nabla ^p r [j] D^{p,p}r[i,k] + \nabla ^p r [k] D^{p,p}r[i,j] ) ~~\\ +&~ K^{(3)}(r) ~~ (\nabla ^{p}r [i] \nabla ^{p}r [j] \nabla ^{p}r [k]). ~~ \end{aligned} }\nonumber \\ \end{aligned}$$
(B.41)

The third-order derivative of r is also a third-order tensor (\(3\times 3\times 3\)) given by

$$\begin{aligned} \begin{aligned} D^{p,p,p}r({\mathbf {h}}_{pq})[i,j,k] = -&\frac{1}{r^3} ~~ [ A_{i,j} \mathbf {t}_k + A_{i,k}\mathbf {t}_j + A_{j,k} \mathbf {t}_i ] \\ +&~ \frac{3}{r^5} ~~ [ \mathbf {t}_i ~~\mathbf {t}_j ~~ \mathbf {t}_k ], \end{aligned} \end{aligned}$$
(B.42)

where \(\mathbf {t}= A {\mathbf {h}}_{p,q}\).
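Equations B.41–B.42 can likewise be checked term by term against nested finite differences. The sketch below (Python with NumPy; Gaussian model and anisotropy matrix illustrative) builds the full third-order tensor with `einsum` and compares it with triple centered differences of \(K(r({\mathbf {h}}_{pq}))\) with respect to \(\mathbf {p}\):

```python
import numpy as np

K   = lambda r: np.exp(-r**2)            # illustrative Gaussian model
dK  = lambda r: -2*r*np.exp(-r**2)
d2K = lambda r: (4*r**2 - 2)*np.exp(-r**2)
d3K = lambda r: (12*r - 8*r**3)*np.exp(-r**2)

A = np.diag([1.0, 2.0, 4.0])             # illustrative anisotropy matrix
r_of = lambda h: np.sqrt(h @ A @ h)

def Dppp_C(h):
    """Eq. B.41 with D^{p,p} r = A/r - t t^t / r^3, t = A h, and Eq. B.42."""
    r, t = r_of(h), A @ h
    g = t / r                                          # grad^p r
    H = A / r - np.outer(t, t) / r**3                  # D^{p,p} r
    T = (-(np.einsum('ij,k->ijk', A, t) + np.einsum('ik,j->ijk', A, t)
           + np.einsum('jk,i->ijk', A, t)) / r**3
         + 3*np.einsum('i,j,k->ijk', t, t, t) / r**5)  # D^{p,p,p} r (Eq. B.42)
    return (dK(r)*T
            + d2K(r)*(np.einsum('i,jk->ijk', g, H) + np.einsum('j,ik->ijk', g, H)
                      + np.einsum('k,ij->ijk', g, H))
            + d3K(r)*np.einsum('i,j,k->ijk', g, g, g))

p = np.array([0.3, -0.2, 0.5])
q = np.array([-0.1, 0.4, 0.0])
analytic = Dppp_C(p - q)

# Triple-nested centered differences of C(p) = K(r(p - q)).
eps, E = 1e-3, np.eye(3)
C = lambda p_: K(r_of(p_ - q))
def fd3(i, j, k):
    f1 = lambda p_: (C(p_ + eps*E[k]) - C(p_ - eps*E[k])) / (2*eps)
    f2 = lambda p_: (f1(p_ + eps*E[j]) - f1(p_ - eps*E[j])) / (2*eps)
    return (f2(p + eps*E[i]) - f2(p - eps*E[i])) / (2*eps)
fd = np.array([[[fd3(i, j, k) for k in range(3)] for j in range(3)] for i in range(3)])
```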

1.6 Fourth-Order Derivative of the Covariance Function

$$\begin{aligned} \begin{aligned} \mathrm {Cov}(D^2_{\tau } ~ Z({\mathbf {p}}) , D^2_{\mu } Z({\mathbf {q}}) )&= \mathrm {Cov}(~^t{\varvec{\tau }}~ . D^{2} Z ( {\mathbf {p}}) . {\varvec{\tau }} ~ , ~ ~^{t}\varvec{\mu } ~ . D^2 Z( {\mathbf {q}}) . \mu )\\&= \sum _i \sum _j \sum _k \sum _l {\varvec{\tau }}[i] ~ {\varvec{\tau }}[j] ~ \varvec{\mu }[k] ~ \mu [l] ~ D^{p,p,q,q} ~ C_Z ({\mathbf {h}}_{pq})[i,j,k,l].\end{aligned}\nonumber \\ \end{aligned}$$
(B.43)

Recalling that

$$\begin{aligned} D^{p,p,q,q} ~ C_Z ({\mathbf {h}}_{pq}) = D^{p,p,p,p} ~ C_Z ({\mathbf {h}}_{pq}), \end{aligned}$$
(B.44)

\(D^{p,p,p,p} ~ C_Z ({\mathbf {h}}_{pq}) \) is a \(3\times 3\times 3\times 3\) fourth-order tensor

$$\begin{aligned} \boxed { D^{p,p,p,p} ~C_Z ( {\mathbf {h}}_{pq} ) = K'(r)~ M_1 + K^{(2)}(r) ~M_2 + K^{(3)}(r) ~M_3 + K^{(4)}(r) ~M_4, }\nonumber \\ \end{aligned}$$
(B.45)

with

$$\begin{aligned} \begin{aligned}&M_1 = D^{p, p, p, p}r \\&M_2 [i,j,k,l] = D^{p, p}r [i,j] ~~ D^{p, p}r [k,l] + D^{p,p}r [i,k] ~~ D^{p,p}r [j,l] \\&\quad + D^{p,p}r [i,l] ~~ D^{p,p}r [j,k] \\&~~ ~~ + \nabla ^{p}r [i] D^{p,p,p}r [j,k,l] + \nabla ^{p}r [j] D^{p,p,p}r[i,k,l] \\&\quad + \nabla ^{p}r [k] D^{p,p,p}r[i,j,l] + \nabla ^{p}r[l] D^{p,p,p}r[i,j,k] \\&M_3[i,j,k,l] = D^{p, p}r[i,j] ~ \nabla ^{p}r[k] ~ \nabla ^{p}r[l] + D^{p,p}r[i,k] ~ \nabla ^{p}r[j] ~ \nabla ^{p}r[l] \\&\quad + D^{p, p}r[i,l] ~ \nabla ^{p}r[j] ~ \nabla ^{p}r[k] \\&~~ ~~ +D^{p,p}r[j,k] ~ \nabla ^{p}r[i] ~ \nabla ^{p}r[l] \\&\quad + D^{p, p}r[j,l] ~ \nabla ^{p}r[i] ~ \nabla ^{p}r[k] + D^{p, p}r[k,l] ~ \nabla ^{p}r[i] ~ \nabla ^{p}r[j]\\&M_4 [i,j,k,l]= \nabla ^{p}r[i] ~~ \nabla ^{p}r[j] ~~ \nabla ^{p}r[k] ~~ \nabla ^{p}r[l] \end{aligned} \end{aligned}$$
(B.46)

and

$$\begin{aligned} D^{p,p,p,p} ~r ( {\mathbf {h}}_{pq} ) [i,j,k,l]= & {} - ~ \frac{1}{r^3} (A_{ij}A_{kl} + A_{ik}A_{jl} + A_{il}A_{jk} )\nonumber \\&+ ~ \frac{3}{r^5} (A_{ij} ~\mathbf {t}_k ~\mathbf {t}_l + A_{ik} ~\mathbf {t}_j ~\mathbf {t}_l + A_{il} ~\mathbf {t}_j ~\mathbf {t}_k \nonumber \\&\qquad \quad + A_{jk} ~\mathbf {t}_i ~\mathbf {t}_l + A_{jl} ~\mathbf {t}_i ~\mathbf {t}_k + A_{kl} ~\mathbf {t}_i ~ \mathbf {t}_j ) \nonumber \\&- ~ \frac{15}{r^7} ( \mathbf {t}_i ~~ \mathbf {t}_j ~~ \mathbf {t}_k ~~ \mathbf {t}_l), \end{aligned}$$
(B.47)

where \(\mathbf {t}= A {\mathbf {h}}_{pq}\).
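The assembly of Eqs. (B.45)–(B.47) can be sketched numerically. The NumPy code below is an illustrative implementation of ours, not the authors': it uses a Gaussian kernel \(K(r)=e^{-r^2}\) as a stand-in for whatever covariance is actually employed, and it writes each summand of the \(3/r^5\) part of \(D^{p,p,p,p}r\) with a single factor of \(A\), as differentiation of Eq. (B.42) requires. The third-order tensor `d3_C` (chain rule applied once more to the Hessian of \(C\)) serves as an independent cross-check.

```python
import numpy as np

def geom(h, A):
    """r(h) = sqrt(h^T A h) and its derivative tensors up to order four."""
    t = A @ h
    r = np.sqrt(h @ t)
    g1 = t / r                                    # gradient of r
    g2 = A / r - np.outer(t, t) / r**3            # Hessian of r
    g3 = (-(np.einsum('ij,k->ijk', A, t) + np.einsum('ik,j->ijk', A, t)
            + np.einsum('jk,i->ijk', A, t)) / r**3
          + 3 * np.einsum('i,j,k->ijk', t, t, t) / r**5)          # Eq. B.42
    g4 = (-(np.einsum('ij,kl->ijkl', A, A) + np.einsum('ik,jl->ijkl', A, A)
            + np.einsum('il,jk->ijkl', A, A)) / r**3
          + 3 * (np.einsum('ij,k,l->ijkl', A, t, t) + np.einsum('ik,j,l->ijkl', A, t, t)
                 + np.einsum('il,j,k->ijkl', A, t, t) + np.einsum('jk,i,l->ijkl', A, t, t)
                 + np.einsum('jl,i,k->ijkl', A, t, t) + np.einsum('kl,i,j->ijkl', A, t, t)) / r**5
          - 15 * np.einsum('i,j,k,l->ijkl', t, t, t, t) / r**7)   # Eq. B.47
    return r, g1, g2, g3, g4

def K_derivs(r):
    """First four derivatives of the illustrative kernel K(r) = exp(-r^2)."""
    e = np.exp(-r * r)
    return (-2*r*e, (4*r*r - 2)*e, (12*r - 8*r**3)*e, (16*r**4 - 48*r*r + 12)*e)

def d3_C(h, A):
    """Third-order derivative tensor of C(h) = K(r(h)), via the chain rule."""
    r, g1, g2, g3, _ = geom(h, A)
    K1, K2, K3, _ = K_derivs(r)
    S = (np.einsum('k,ij->ijk', g1, g2) + np.einsum('j,ik->ijk', g1, g2)
         + np.einsum('i,jk->ijk', g1, g2))
    return K1 * g3 + K2 * S + K3 * np.einsum('i,j,k->ijk', g1, g1, g1)

def d4_C(h, A):
    """Fourth-order derivative tensor of C(h) = K(r(h)) (Eqs. B.45-B.46)."""
    r, g1, g2, g3, g4 = geom(h, A)
    K1, K2, K3, K4 = K_derivs(r)
    M1 = g4
    M2 = (np.einsum('ij,kl->ijkl', g2, g2) + np.einsum('ik,jl->ijkl', g2, g2)
          + np.einsum('il,jk->ijkl', g2, g2)
          + np.einsum('i,jkl->ijkl', g1, g3) + np.einsum('j,ikl->ijkl', g1, g3)
          + np.einsum('k,ijl->ijkl', g1, g3) + np.einsum('l,ijk->ijkl', g1, g3))
    M3 = (np.einsum('ij,k,l->ijkl', g2, g1, g1) + np.einsum('ik,j,l->ijkl', g2, g1, g1)
          + np.einsum('il,j,k->ijkl', g2, g1, g1) + np.einsum('jk,i,l->ijkl', g2, g1, g1)
          + np.einsum('jl,i,k->ijkl', g2, g1, g1) + np.einsum('kl,i,j->ijkl', g2, g1, g1))
    M4 = np.einsum('i,j,k,l->ijkl', g1, g1, g1, g1)
    return K1*M1 + K2*M2 + K3*M3 + K4*M4
```

A central finite difference of `d3_C` in each direction reproduces `d4_C`, and the assembled tensor is fully symmetric in its four indices, as a covariance derivative must be.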

Demonstration of the Null Second-Order Derivative Along the Hinge Line

If we consider a twice-differentiable parametric curve \(\gamma : t \mapsto \begin{pmatrix} x(t)\\ y(t)\\ z(t) \end{pmatrix} \in \mathbb {R}^3\) and define \(g: t \mapsto Z(\gamma (t))\), where \(Z: \mathbb {R}^3 \rightarrow \mathbb {R}\) is our potential field, then in full generality

$$\begin{aligned} g'(t) = \nabla Z(\gamma (t) ) . \overrightarrow{\gamma '(t)} \end{aligned}$$
(C.48)

and

$$\begin{aligned} g'' (t) = \nabla Z(\gamma (t) ) . \overrightarrow{\gamma ''(t)} + ^t{\overrightarrow{\gamma '(t)}} ~.~ D^2 Z(\gamma (t) ) ~.~ \overrightarrow{\gamma '(t)}. \end{aligned}$$
(C.49)

Let us assume that \(\gamma \) is a parametrization of a planar curve included in an isopotential of Z, and that furthermore there exists a plane that is tangent to this isopotential along the curve defined by \(\gamma \). This is the case, for instance, for any parametrization of the \(H_3\) curve of Fig. 1.

Under this assumption, g is constant; therefore, in particular, \(g'' = 0\). Since \(\overrightarrow{\gamma ''(t)}\) lies in the same plane as \(\gamma \) (plane \(P_t\) in the case of Fig. 1), which is tangent to the isopotential, it is orthogonal to the gradient \(\nabla Z(\gamma (t) )\).

Equation C.49 then reduces to \({}^t {\overrightarrow{\gamma '(t)}} ~.~ D^2 Z(\gamma (t) ) ~.~ \overrightarrow{\gamma '(t)} = 0 \). In other words, the directional second derivative of Z is null along \(\overrightarrow{\gamma '(t)}\), which is the direction of the crest/hinge line \(H_3\) in the particular case of Fig. 1.
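This property can be checked on a toy field. In the sketch below (our own illustration, not from the paper), \(Z(\mathbf{p}) = p_z - \sin(\mathbf{a}\cdot\mathbf{p})\) is a perfectly cylindrical fold whose hinge direction is orthogonal to the folding direction \(\mathbf{a}\); a mixed central-difference Hessian confirms that the directional second derivative of Z vanishes along the hinge line but not along \(\mathbf{a}\). All numerical values are arbitrary.

```python
import numpy as np

theta = 0.3                                            # arbitrary fold orientation
a = np.array([np.cos(theta), np.sin(theta), 0.0])      # folding (shortening) direction
axis = np.array([-np.sin(theta), np.cos(theta), 0.0])  # hinge-line direction, a . axis = 0

def Z(p):
    """Toy potential: a perfectly cylindrical fold, invariant along 'axis'."""
    return p[2] - np.sin(a @ p)

def num_hessian(f, p, eps=1e-4):
    """Mixed central-difference Hessian of a scalar field f at point p."""
    H = np.empty((3, 3))
    for i in range(3):
        for j in range(3):
            ei = np.zeros(3); ei[i] = eps
            ej = np.zeros(3); ej[j] = eps
            H[i, j] = (f(p + ei + ej) - f(p + ei - ej)
                       - f(p - ei + ej) + f(p - ei - ej)) / (4 * eps * eps)
    return H

p = np.array([0.4, -0.2, 1.0])
H = num_hessian(Z, p)
# axis @ H @ axis is numerically zero; a @ H @ a equals sin(a . p) here.
```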

Volume Error Indicator

1.1 Volume Error (\(V_e\)) Computation

The indicator \(V_e\) provides a quantitative measure of the ability of the model to reconstruct the structure of interest of our synthetic example. It can be thought of as the volume between the two surfaces near the fold.

Let us denote by \(z(x,y)\) and \({\hat{z}} (x,y)\) the respective elevations of the theoretical surface and the modeled surface at location \((x,y)\). The isopotential considered in this work, described in Sect. 3, has a constant elevation of \(z_{floor} = 0.6\) over a large part of the domain, but no information is provided to any model in this zone. In order not to penalize errors made far from the region of interest, we consider \(\min ({\hat{z}},z_{floor})\) instead of \({\hat{z}}\) when calculating the volume

$$\begin{aligned} V_e = \Vert z - \min ({\hat{z}}, z_{floor}) \Vert _1 = \int \vert z(x,y) - \min ({\hat{z}}(x,y),z_{floor}) \vert ~ \mathrm {d}x ~ \mathrm {d}y. \end{aligned}$$
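A minimal sketch of the \(V_e\) computation, assuming both surfaces are sampled on the same cell-centered grid and the integral is approximated by a Riemann sum; the grid extent and resolution are illustrative choices, not the paper's.

```python
import numpy as np

# Cell-centered grid over a unit-square (x, y) domain (illustrative).
nx = ny = 200
dx = dy = 1.0 / nx
z_floor = 0.6   # constant elevation of the isopotential far from the fold

def volume_error(z_true, z_model, z_floor, dx, dy):
    """V_e: L1 volume between the reference surface and the clipped model,
    approximated by a Riemann sum over the grid cells."""
    clipped = np.minimum(z_model, z_floor)
    return np.sum(np.abs(z_true - clipped)) * dx * dy
```

With both surfaces on the same grid, \(V_e\) is simply a clipped L1 norm scaled by the cell area.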

1.2 Uncertainty (\(\varDelta V\)) on this Computation

In practice, the interpolated value of the potential \(Z^* (x,y,z)\) is known only at the grid points \((x_i,y_j,z_k)\); at each location \((x_i,y_j)\), the elevation \({\hat{z}} (x_i,y_j)\) is estimated by taking the \(z_k\) for which \(Z^*(x_i,y_j,z_k)\) is closest to 2.5, the value of the isosurface considered here. This leads to an estimation uncertainty \(\varDelta V\) that depends on the vertical resolution: the finer the grid, the lower this uncertainty. For each model, this uncertainty is computed along with the volume error indicator. It is almost constant but may vary slightly, because it is zero at locations where \({\hat{z}} \le z_{floor}\), since the value \(z_{floor}\) is used there anyway. Several models are close in performance in terms of this indicator, so this uncertainty needs to be taken into account when comparing them.
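The extraction of \({\hat{z}}\) from the gridded potential can be sketched as follows. The half-cell uncertainty bound and the function names are our assumptions; the paper only states that \(\varDelta V\) scales with the vertical resolution and vanishes where \({\hat{z}} \le z_{floor}\).

```python
import numpy as np

iso_value = 2.5   # value of the isosurface considered in the paper

def surface_from_potential(Zstar, z_axis, iso_value):
    """At each (x_i, y_j), pick the grid elevation z_k whose interpolated
    potential Z*(x_i, y_j, z_k) is closest to iso_value.
    Zstar has shape (nx, ny, nz); z_axis has shape (nz,)."""
    k = np.argmin(np.abs(Zstar - iso_value), axis=2)
    return z_axis[k]

def delta_V(z_hat, z_floor, dz, dx, dy):
    """Assumed half-cell bound on the volume uncertainty: each column whose own
    elevation is actually used (z_hat > z_floor) contributes at most dz/2."""
    active = z_hat > z_floor
    return np.sum(active) * (dz / 2.0) * dx * dy
```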

1.3 Relative Volume Improvement (RVI, \(\eta V_e\))

S1 : Scenario 1; S2 : Scenario 2

\(V_e\): Volume error (\(u^3\))

\(V_{e_1}\) : Volume error of initial result of Scenario 1 (\(u^3\))

\(V_{e_2}\): Volume error of initial result of Scenario 2 (\(u^3\))

where u is the arbitrary length unit used here.

\(\eta V_e\) (RVI): Signed relative difference of \(V_e\) compared to the initial result in the same scenario (\(\%\))

$$\begin{aligned} \eta V_e = \frac{V_{e_i} - V_e}{V_{e_i}}, \quad \quad i \in \{1,2\}. \end{aligned}$$
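For reference, the RVI is a one-liner; positive values indicate an improvement over the scenario's initial result (multiply by 100 to express it in \(\%\), as in the paper's tables). The function name is ours.

```python
def rvi(Ve, Ve_init):
    """Signed relative volume improvement (eta V_e), as a fraction of the
    scenario's initial volume error; positive means improvement."""
    return (Ve_init - Ve) / Ve_init
```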

1.4 Results

This section summarizes the intermediate computations behind the volume error results of the different investigations in Sect. 4.

[Unnumbered figure in the original: table of the intermediate volume error computations for Sect. 4.]


Pizzella, L., Alais, R., Lopez, S. et al. Taking Better Advantage of Fold Axis Data to Characterize Anisotropy of Complex Folded Structures in the Implicit Modeling Framework. Math Geosci 54, 95–130 (2022). https://doi.org/10.1007/s11004-021-09950-0

