
Single-Index Importance Sampling with Stratification


Abstract

In many stochastic problems, the output of interest depends on an input random vector mainly through a single random variable (or index) via an appropriate univariate transformation of the input. We exploit this feature by proposing an importance sampling method that makes rare events more likely by changing the distribution of the chosen index. Further variance reduction is guaranteed by combining this single-index importance sampling approach with stratified sampling. The dimension-reduction effect of single-index importance sampling also enhances the effectiveness of quasi-Monte Carlo methods. The proposed method applies to a wide range of financial or risk management problems. We demonstrate its efficiency for estimating large loss probabilities of a credit portfolio under normal and t-copula models and show that our method outperforms the current standard for these problems.


Data Availability

All numerical examples presented in this paper can be reproduced with an R script available from the corresponding author upon request.

References

  • Adragni K, Cook R (2009) Sufficient dimension reduction and prediction in regression. Phil Trans Math Phys Eng Sci 367(1906):4385–4405

  • Arbenz P, Cambou M, Hofert M, Lemieux C, Taniguchi Y (2018) Importance sampling and stratification for copula models. In: Contemporary Computational Mathematics – A Celebration of the 80th Birthday of Ian Sloan. Springer

  • Asmussen S, Glynn P (2007) Stochastic Simulation: Algorithms and Analysis. Springer, Berlin

  • Au S, Beck J (2003) Important sampling in high dimensions. Struct Saf 25(2):139–163

  • Bassamboo A, Juneja S, Zeevi A (2008) Portfolio credit risk with extremal dependence: Asymptotic analysis and efficient simulation. Oper Res 56(3):593–606

  • Caflisch R, Morokoff W, Owen A (1997) Valuation of mortgage-backed securities using Brownian bridges to reduce effective dimension. J Comput Financ 1(1):27–46. https://doi.org/10.21314/JCF.1997.005

  • Cambou M, Hofert M, Lemieux C (2016) Quasi-random numbers for copula models. Stat Comput 27(5):1307–1329. https://doi.org/10.1007/s11222-016-9688-4

  • Chan J, Kroese D (2010) Efficient estimation of large portfolio loss probabilities in t-copula models. Eur J Oper Res 205(2):361–367

  • Cook R (1998) Regression Graphics. Wiley, New York

  • Cook R, Forzani L (2009) Likelihood-based sufficient dimension reduction. J Am Stat Assoc 104(485):197–208

  • Cochran W (2005) Sampling Techniques, 3rd edn. Wiley, New York

  • De Boer P, Kroese D, Mannor S, Rubinstein R (2005) A tutorial on the cross-entropy method. Ann Oper Res 134(1):19–67

  • Dick J, Pillichshammer F (2010) Digital Nets and Sequences: Discrepancy Theory and Quasi-Monte Carlo Integration. Cambridge University Press, Cambridge

  • Glasserman P, Heidelberger P, Shahabuddin P (1999) Asymptotically optimal importance sampling and stratification for pricing path-dependent options. Math Financ 9(2):117–152

  • Glasserman P, Heidelberger P, Shahabuddin P (2000) Variance reduction techniques for estimating value-at-risk. Manage Sci 46(10):1349–1364

  • Glasserman P, Heidelberger P, Shahabuddin P (2002) Portfolio value-at-risk with heavy-tailed risk factors. Math Financ 12(3):239–269

  • Glasserman P, Li J (2005) Importance sampling for portfolio credit risk. Manage Sci 51(11):1643–1656

  • Härdle W, Hall P, Ichimura H (1993) Optimal smoothing in single-index models. Ann Stat 21(1):157–178

  • Harris W, Helvig T (1965) Marginal and conditional distributions of singular distributions. Publ Res Inst Math Sci Kyoto Univ Ser A 1(2):199–204

  • Hörmann W, Leydold J (2003) Continuous random variate generation by fast numerical inversion. ACM Trans Model Comput Simul 13(4):347–362

  • Ichimura H (1993) Semiparametric least squares (SLS) and weighted SLS estimation of single-index models. J Econom 58(1–2):71–120

  • Kahn H, Marshall A (1953) Methods of reducing sample size in Monte Carlo computations. J Oper Res Soc Am 1(5):263–278

  • Karlin S, Taylor H (1975) A First Course in Stochastic Processes, vol 1. Gulf Professional Publishing, Houston

  • Kole E, Koedijk K, Verbeek M (2007) Selecting copulas for risk management. J Bank Financ 31(8):2405–2423

  • Katafygiotis L, Zuev K (2008) Geometric insight into the challenges of solving high-dimensional reliability problems. Probab Eng Mech 23(2–3):208–218

  • Kvalseth T (1985) Cautionary note about R². Am Stat 39(4):279–285

  • Lavenberg S, Welch P (1981) A perspective on the use of control variables to increase the efficiency of Monte Carlo simulations. Manage Sci 27(3):322–335

  • Lemieux C (2009) Monte Carlo and Quasi-Monte Carlo Sampling. Springer, Berlin. https://doi.org/10.1007/978-0-387-78165-5

  • Leydold J, Hörmann W (2020) Runuran: R Interface to the 'UNU.RAN' Random Variate Generators. R package version 0.30. https://CRAN.R-project.org/package=Runuran

  • Li K (1991) Sliced inverse regression for dimension reduction. J Am Stat Assoc 86(414):316–327

  • Loève M (1963) Probability Theory. Van Nostrand, New York

  • Niederreiter H (1978) Quasi-Monte Carlo methods and pseudo-random numbers. Bull Am Math Soc 84(6):957–1041

  • Neddermeyer J (2011) Non-parametric partial importance sampling for financial derivative pricing. Quant Financ 11(8):1193–1206

  • Powell JL, Stock JH, Stoker TM (1989) Semiparametric estimation of index coefficients. Econometrica 57(6):1403–1430

  • R Core Team (2020) R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing, Vienna, Austria. http://www.R-project.org

  • Reinsch C (1967) Smoothing by spline functions. Numer Math 10(3):177–183

  • Rubinstein R (1997) Optimization of computer simulation models with rare events. Eur J Oper Res 99(1):89–112

  • Rubinstein R, Kroese D (2013) The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning. Springer, Berlin

  • Sobol' I (1967) On the distribution of points in a cube and the approximate evaluation of integrals. USSR Comput Math Math Phys 7(4):86–112. https://doi.org/10.1016/0041-5553(67)90144-9

  • Sak H, Hörmann W, Leydold J (2010) Efficient risk simulations for linear asset portfolios in the t-copula model. Eur J Oper Res 202(3):802–809

  • Stoker T (1986) Consistent estimation of scaled coefficients. Econometrica 54(6):1461–1481

  • Schuëller G, Pradlwarter H, Koutsourelakis P (2004) A critical appraisal of reliability estimation procedures for high dimensions. Probab Eng Mech 19(4):463–474

  • Wang X, Fang K (2003) The effective dimension and quasi-Monte Carlo integration. J Complex 19(2):101–124. https://doi.org/10.1016/S0885-064X(03)00003-7

  • Wang X, Sloan I (2005) Why are high-dimensional finance problems often of low effective dimension? SIAM J Sci Comput 27(1):159–183. https://doi.org/10.1137/S1064827503429429

  • Wang X (2006) On the effects of dimension reduction techniques on some high-dimensional problems in finance. Oper Res 54(6):1063–1078

  • Wang L, Brown L, Cai T, Levine M (2008) Effect of mean on variance function estimation in nonparametric regression. Ann Stat 36(2):646–664


Acknowledgements

The second and third authors would like to thank NSERC for its financial support of this work through Discovery Grants RGPIN-5010-2015 and RGPIN-238959, respectively. We also thank an anonymous referee for their insightful comments, which helped improve this paper.

Author information


Corresponding author

Correspondence to Erik Hintz.

Ethics declarations

Conflicts of Interest

The authors declare no conflicts of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

1.1 Proofs

Proof of Proposition 1

The mean and variance follow from

$$\mathbb {E}_g(\hat{\mu }^{\tiny{\text{SIS}}}_n) = \mathbb {E}_g(\Psi (\varvec{X}) w(T)) = \mathbb {E}_g(m(T)w(T)) = \int _{\Omega _g} m(t)\, \frac{f_T(t)}{g_T(t)}\,g_T(t)\;\mathrm {d}t=\mu _{\tiny{\text{SIS}}}$$

and

$$n {\text {Var}}_g(\hat{\mu }^{\tiny{\text{SIS}}}_n) + \mu _{\tiny{\text{SIS}}}^2 = \mathbb {E}_g\left( \Psi ^2(\varvec{X})w^2(T)\right) =\int _{\Omega _g} m^{(2)}(t)\,\frac{f_T^2(t)}{g_T(t)}\;\mathrm {d}t.$$

Asymptotic normality follows from the central limit theorem. Next, among all \(g\) that give an unbiased estimator, we need to find the \(g_T\) for which the variance, or equivalently \(\mathbb {E}_g(m^{(2)}(T)w^2(T))\), is minimal; here we assume that \(\Psi (\varvec{x})\ge 0\) for all \(\varvec{x}\in \Omega\) or \(\Psi (\varvec{x})\le 0\) for all \(\varvec{x}\in \Omega\). Let \(\Omega _{\tiny{\text{ub}}}=\{t\in \Omega _f:m(t)f_T(t)\ne 0\}\). By Jensen's inequality,

$$\begin{aligned} \mathbb {E}_g\left( m^{(2)}(T)w^2(T)\right)&\ge \left( \mathbb {E}_g\left( \sqrt{m^{(2)}(T)}\,w(T)\right) \right) ^2 \\&= \left( \int _{\Omega _g} \sqrt{m^{(2)}(t)}\,f_T(t)\;\mathrm {d}t\right) ^2 = \left( \int _{\Omega _f} \sqrt{m^{(2)}(t)}\,f_T(t)\;\mathrm {d}t\right) ^2. \end{aligned}$$

The last equality holds since \(\hat{\mu }^{\tiny{\text{SIS}}}_n\) is assumed to be unbiased, i.e., \(\Omega _{\tiny{\text{ub}}}\subseteq \Omega _g\), and since \(\sqrt{m^{(2)}(t)}f_T(t)=0\) for \(t\not \in \Omega _{\tiny{\text{ub}}}\) (as \(m(t)=0\) implies \(m^{(2)}(t)=0\) by the assumption on \(\Psi\)). The right-hand side of the inequality is a constant independent of the choice of \(g_T\), namely the minimum variance among all SIS estimators. To achieve equality, and thus minimize the variance, set \(g_T(t)\propto \sqrt{m^{(2)}(t)}\,f_T(t)\) for \(t\in \Omega _{\tiny{\text{ub}}}\), and the claim follows.
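
To make the SIS construction concrete, here is a minimal R sketch (R being the language of the paper's numerical examples) that applies Proposition 1 to an illustrative toy problem, not the paper's credit-portfolio application: estimating \(\mu =\mathbb {P}(\sum _j X_j > K)\) for \(\varvec{X}\sim {\text {N}}_d(\varvec{0},I_d)\) with index \(T=\varvec{\beta }^{\top }\varvec{X}\). The proposal \(g_T\) (a shifted normal) and all parameters (d, K, mu_g) are ad hoc choices for illustration.

```r
## Minimal SIS sketch (illustrative toy problem, not the paper's credit model):
## estimate mu = P(sum(X) > K) for X ~ N_d(0, I_d) via the index T = beta'X.
set.seed(1)
d <- 10; K <- 12; n <- 1e5
beta <- rep(1/sqrt(d), d)                 # unit vector, so T = beta'X ~ N(0,1) under f
mu_g <- K/sqrt(d)                         # ad hoc mean shift for g_T (an assumption)
T_ <- rnorm(n, mean = mu_g)               # T ~ g_T = N(mu_g, 1); 'T_' avoids masking TRUE
w  <- dnorm(T_) / dnorm(T_, mean = mu_g)  # w(T) = f_T(T)/g_T(T)
## conditional sampling X | T = t ~ N_d(beta*t, I_d - beta beta') (cf. Proposition 4)
Z  <- matrix(rnorm(n * d), n, d)
X  <- outer(T_, beta) + (Z - outer(drop(Z %*% beta), beta))
Psi <- as.numeric(rowSums(X) > K)
est <- mean(Psi * w)                      # SIS estimate of mu
se  <- sd(Psi * w) / sqrt(n)              # standard error of the (iid) SIS estimator
c(estimate = est, std.error = se, exact = pnorm(K/sqrt(d), lower.tail = FALSE))
```

The shifted normal is a simple, suboptimal stand-in for the optimal density \(g_T(t)\propto \sqrt{m^{(2)}(t)}\,f_T(t)\) of Proposition 1.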

Proof of Proposition 2

Let \(\Omega _T^{(i)}=\{t\in (t_{\inf },t_{\sup }): \lambda _{i} \le t < \lambda _{i+1}\}\), where \(\lambda _i=G_T^\leftarrow ( (i-1)/n)\), and note that \(\mathbb {P}(T\in \Omega _T^{(i)})=1/n\) for \(i=1,\dots ,n\). Then

$$\begin{aligned} \mathbb {E}(\hat{\mu }^{\tiny{\text{SSIS}}}_n)&=\frac{1}{n}\sum _{i=1}^n \mathbb {E}_g\left( \Psi (\varvec{X})w(T)\mid T \in \Omega _T^{(i)}\right) =\frac{1}{n}\sum _{i=1}^n \mathbb {E}_g\left( \mathbb {E}_g(\Psi (\varvec{X})w(T)\mid T) \mid T \in \Omega _T^{(i)}\right) \\&= \frac{1}{n}\sum _{i=1}^n n\int _{\lambda _i}^{\lambda _{i+1}}m(t)\, \frac{f_T(t)}{g_T(t)}\,g_T(t)\;\mathrm {d}t = \int _{t_{\inf }}^{t_{\sup }} m(t)f_T(t)\;\mathrm {d}t = \mu _{\tiny{\text{SIS}}}. \end{aligned}$$

The expression for the variance is a slight generalization of (Glasserman et al. 1999, Lemma 4.1) in that stratification is combined with IS, but it can be proved similarly. Let \(\eta _{n}(t)\) denote the index i such that \(t\in \Omega _T^{(i)}\). Then

$$\begin{aligned} n{\text {Var}}(\hat{\mu }^{\tiny{\text{SSIS}}}_n)=\frac{1}{n}\sum _{i=1}^n {\text {Var}}_g\left( \Psi (\varvec{X})w(T)\mid T \in \Omega _T^{(i)}\right) =\mathbb {E}_g\left( {\text {Var}}_g\left( \Psi (\varvec{X})w(T)\mid \eta _n(T)\right) \right) . \end{aligned}$$

Let \(\xi =\mathbb {E}_g(\Psi (\varvec{X})w(T)\mid T)=m(T)w(T)\) and define the sequence \(\xi _n=\mathbb {E}_g(\xi \mid \eta _n(T))\). Note that the \(\sigma\)-algebras generated by \(\eta _n(T)\) form an increasing family as n increases through constant multiples of powers of two. Observe that \(\mathbb {E}_g(|\xi |)<\infty\) and \(\sup _n \mathbb {E}_g(\xi _n^2)\le \mathbb {E}_g(\Psi ^2(\varvec{X})w^2(T))=\mathbb {E}_g(m^{(2)}(T)w^2(T))<\infty\). Also, \(\xi _n\) is a martingale if n increases through constant multiples of powers of two, as it is a Doob martingale (see Karlin and Taylor 1975, p. 246). Then, using the same arguments as in (Glasserman et al. 1999, Lemma 4.1), it follows that \({\text {Var}}_g(\hat{\mu }^{\tiny{\text{SSIS}}}_n)=\sigma _{\tiny{\text{SSIS}}}^2/n+o(1/n)\).

The expression for the optimal density and the variance expressions follow as in the proof of Proposition 1 by applying Jensen's inequality. It remains to show that the SSIS estimator is asymptotically normal, which we do by applying the Lyapunov central limit theorem (see Kole et al. 2007, p. 134). Let \(m_i = \mathbb {E}_g(\Psi (\varvec{X})w(T)\mid T\in \Omega _T^{(i)})\) and \(v_i^2={\text {Var}}_g(\Psi (\varvec{X})w(T)\mid T\in \Omega _T^{(i)})\). It is easily seen that \((1/n)\sum _{i=1}^n m_i=\mu_{\tiny{\text{SIS}}}\) and \((1/n)\sum _{i=1}^n v_i^2=\sigma _{\tiny{\text{SSIS}}}^2+o(1)\). For any \(i=1,\dots ,n\), we have

$$\begin{aligned}&\mathbb {E}_g\left( |\Psi (\varvec{X}_i)w(T_i)-m_i|^{2+\delta }\right) \le 2^{2+\delta }\left( \mathbb {E}_g\left( |\Psi (\varvec{X}_i)w(T_i)|^{2+\delta }\right) +\mathbb {E}_g\left( |m_i| ^{2+\delta }\right) \right) \\&= 2^{2+\delta } \left( \mathbb {E}_g\left( |\Psi (\varvec{X})w(T)|^{2+\delta } \mid T \in \Omega _T^{(i)}\right) + \mathbb {E}_g\left( |\mathbb {E}_g(\Psi (\varvec{X})w(T)\mid T\in \Omega _T^{(i)})|^{2+\delta }\right) \right) \\&\le 2^{2+\delta } \left( \mathbb {E}_g\left( |\Psi (\varvec{X})w(T)|^{2+\delta } \mid T \in \Omega _T^{(i)}\right) + \mathbb {E}_g\left( \mathbb {E}_g(|\Psi (\varvec{X})w(T)|^{2+\delta }\mid T\in \Omega _T^{(i)}) \right) \right) \\&=2^{3+\delta }\mathbb {E}_g\left( |\Psi (\varvec{X})w(T)|^{2+\delta }\mid T\in \Omega _T^{(i)}\right) , \end{aligned}$$

where the first inequality follows from the \(c_r\) inequality (see Loève 1963, p. 155). The Lyapunov condition is satisfied, since

$$\begin{aligned}&\frac{1}{(\sum _{i=1}^n v_i^2)^{1+\delta /2}} \sum _{i=1}^n\mathbb {E}_g\left( |\Psi (\varvec{X}_i)w(T_i)-m_i|^{2+\delta }\right) \\&\le \frac{2^{3+\delta }}{(\sum _{i=1}^n v_i^2)^{1+\delta /2}} \sum _{i=1}^n\mathbb {E}_g\left( |\Psi (\varvec{X})w(T)|^{2+\delta }\mid T\in \Omega _T^{(i)}\right) \\&= \frac{2^{3+\delta } n}{(n\sigma _{\tiny{\text{SSIS}}}^2+o(n))^{1+\delta /2}} \mathbb {E}_g\left( |\Psi (\varvec{X})w(T)|^{2+\delta }\right) \rightarrow 0,\quad n\rightarrow \infty , \end{aligned}$$

by assumption. The Lyapunov central limit theorem together with Slutsky's theorem implies \(\sqrt{n}(\hat{\mu }^{\tiny{\text{SSIS}}}_n-\mu _{\tiny{\text{SIS}}})\underset{}{\overset{\tiny {\text {d}}}{\rightarrow }}{\text {N}}(0,\sigma _{\tiny{\text{SSIS}}}^2)\).
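
The stratification of Proposition 2 changes only how \(T\) is drawn. The R sketch below (same toy setup and caveats as the SIS sketch after Proposition 1) draws exactly one \(T\) per equiprobable stratum of \(g_T\) via \(T_i=G_T^\leftarrow ((i+U_i-1)/n)\).

```r
## SSIS sketch (same illustrative toy setup as the SIS sketch above):
## one draw of T per equiprobable stratum of g_T, T_i = G_T^{-1}((i + U_i - 1)/n).
set.seed(2)
d <- 10; K <- 12; n <- 1e5
beta <- rep(1/sqrt(d), d); mu_g <- K/sqrt(d)
U  <- runif(n)
T_ <- qnorm((1:n + U - 1)/n, mean = mu_g)  # stratified draws from g_T, sorted by construction
w  <- dnorm(T_) / dnorm(T_, mean = mu_g)   # w(T) = f_T(T)/g_T(T)
Z  <- matrix(rnorm(n * d), n, d)
X  <- outer(T_, beta) + (Z - outer(drop(Z %*% beta), beta))  # X | T as in Proposition 4
Y  <- as.numeric(rowSums(X) > K) * w       # Psi(X_i) w(T_i), one term per stratum
mean(Y)                                    # SSIS estimate; same mean as SIS, smaller variance
```

Note that the naive standard error sd(Y)/sqrt(n) is no longer valid here, since stratification makes the draws non-iid; this is exactly what the variance estimator of Proposition 3 addresses.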

Proof of Proposition 3

Recall that \(T_i = G_T^\leftarrow ( (i+U_i-1)/n)\), where \(U_i\overset{\tiny {\text {ind.}}}{\sim }\text {U}(0,1)\) for \(i=1,\dots ,n\), so that the \(T_i\) are ordered, i.e., \(T_1<T_2<\dots <T_n\). For any \(i=1,\dots ,n-1\),

$$T_{i+1}-T_i=(G_T^{-1})'(\xi _i)\left( \frac{1+U_{i+1}-U_i}{n}\right) =\frac{1}{g_T(G_T^{-1}(\xi _i))}\left( \frac{1+U_{i+1}-U_i}{n}\right) =\mathcal {O}(1/n),$$

for some \(\xi _i\) between \((i+U_i-1)/n\) and \((i+U_{i+1})/n\), by the mean value theorem; this implies that for any continuously differentiable function h, \(h(T_{i+1})=h(T_i)+\mathcal {O}(1/n)\). Then we have

$$\begin{aligned} r_i^2&=\left( m(T_{i+1})+\varepsilon _{T_{i+1}} - m(T_i) - \varepsilon _{T_i}\right) ^2\\&= \left( m(T_{i+1})-m(T_i)\right) ^2 + \left( \varepsilon _{T_{i+1}}-\varepsilon _{T_i}\right) ^2+2(m(T_{i+1}) - m(T_i))(\varepsilon _{T_{i+1}}-\varepsilon _{T_i})\\&= (\varepsilon _{T_{i+1}}-\varepsilon _{T_i})^2 + 2(m(T_{i+1}) - m(T_i))(\varepsilon _{T_{i+1}}-\varepsilon _{T_i})+\mathcal {O}(1/n^2), \end{aligned}$$

and so

$$\begin{aligned} \mathbb {E}_g\left( r_i^2 w^2(T_i)\right)&= \mathbb {E}_g\left( \mathbb {E}_g(r_i^2w^2(T_i)\mid T_i, T_{i+1})\right) =\mathbb {E}_g\left( w^2(T_i)(v^2(T_i)+v^2(T_{i+1}))\right) +\mathcal {O}(1/n^2)\\&= 2 \mathbb {E}_g\left( w^2(T_i)v^2(T_i)\right) +\mathcal {O}(1/n), \end{aligned}$$

which means that

$$\begin{aligned} \mathbb {E}_g(\hat{\sigma }_{\tiny{\text{SSIS}}}^2)&= \frac{1}{2(n-1)}\sum _{i=1}^{n-1} \mathbb {E}_g(r_i^2w^2(T_i))=\frac{1}{n}\sum _{i=1}^n\mathbb {E}_g\left( v^2(T)w^2(T)\mid T\in \Omega _T^{(i)}\right) +\mathcal {O}(1/n)\\&= \mathbb {E}_g(v^2(T)w^2(T)) + \mathcal {O}(1/n) = \sigma _{\tiny{\text{SSIS}}}^2+ \mathcal {O}(1/n)\rightarrow \sigma _{\tiny{\text{SSIS}}}^2, \end{aligned}$$

which shows consistency.
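
Continuing the SSIS sketch above (and reusing its objects X, w, n and K), the successive-difference estimator of Proposition 3 takes two lines of R; the half-width below is the usual asymptotic 95% confidence interval justified by Proposition 2.

```r
## Variance estimator of Proposition 3, applied to the SSIS sketch above
## (reuses X, w, n, K; recall T_1 < ... < T_n by the stratified construction).
Psi_ <- as.numeric(rowSums(X) > K)    # Psi(X_i) = m(T_i) + eps_{T_i}
r <- diff(Psi_)                       # r_i = Psi(X_{i+1}) - Psi(X_i), i = 1,...,n-1
sigma2_hat <- sum(r^2 * w[-n]^2) / (2 * (n - 1))
c(estimate = mean(Psi_ * w), CI.halfwidth = 1.96 * sqrt(sigma2_hat / n))
```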

Proof of Proposition 4

We use that \((\varvec{X}\mid T = t) \sim {\text {N}}_d(\varvec{\beta }t,I_d-\varvec{\beta }\varvec{\beta }^{\top })\) (see Harris and Helvig 1965, Theorem 1) to compute the moment generating function of \(\varvec{X}\). For \(\varvec{a}\in \mathbb {R}^d\),

$$\begin{aligned} \mathbb {E}_g(\exp (\varvec{a}^{\top }\varvec{X}))&= \mathbb {E}_g \left( \mathbb {E}(\exp (\varvec{a}^{\top }\varvec{X}) \mid T)\right) = \mathbb {E}_g\left( \exp \left( \varvec{a}^{\top }\varvec{\beta }T + \frac{1}{2} \varvec{a}^{\top }(I_d-\varvec{\beta }\varvec{\beta }^{\top })\varvec{a}\right) \right) \\&=\mathbb {E}_g(\exp (\varvec{a}^{\top }\varvec{\beta }T ))\exp \left( \frac{1}{2} \varvec{a}^{\top }(I_d-\varvec{\beta }\varvec{\beta }^{\top })\varvec{a}\right) \\&=\exp \left( c\varvec{a}^{\top }\varvec{\beta }+\frac{1}{2}(\varvec{a}^{\top }\varvec{\beta })^2\sigma ^2\right) \exp \left( \frac{1}{2} \varvec{a}^{\top }(I_d-\varvec{\beta }\varvec{\beta }^{\top })\varvec{a}\right) \\&= \exp \left( \varvec{a}^{\top }(c\varvec{\beta }) + \frac{1}{2}\varvec{a}^{\top }\left( I_d+(\sigma ^2-1)\varvec{\beta }\varvec{\beta }^{\top }\right) \varvec{a}\right) . \end{aligned}$$

By uniqueness of the moment generating function, \(\varvec{X}\sim {\text {N}}_d(c\varvec{\beta }, I_d+(\sigma ^2-1)\varvec{\beta }\varvec{\beta }^{\top })\).
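
As a quick sanity check of Proposition 4, the following R sketch (with illustrative parameters \(c=2\) and \(\sigma =1.5\), again not from the paper) compares the empirical mean vector and covariance matrix of \(\varvec{X}\) with the ones asserted by the proposition.

```r
## Empirical check of Proposition 4: if T ~ N(c, sigma^2) under g and
## X | T = t ~ N_d(beta*t, I_d - beta beta'), then X ~ N_d(c*beta, I_d + (sigma^2-1) beta beta').
set.seed(3)
d <- 10; nsim <- 2e5; c0 <- 2; sig <- 1.5  # illustrative parameters
beta <- rep(1/sqrt(d), d)
T_ <- rnorm(nsim, mean = c0, sd = sig)
Z  <- matrix(rnorm(nsim * d), nsim, d)
X  <- outer(T_, beta) + (Z - outer(drop(Z %*% beta), beta))
max(abs(colMeans(X) - c0 * beta))                              # ~ 0 up to MC error
max(abs(cov(X) - (diag(d) + (sig^2 - 1) * tcrossprod(beta))))  # ~ 0 up to MC error
```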


About this article


Cite this article

Hintz, E., Hofert, M., Lemieux, C. et al. Single-Index Importance Sampling with Stratification. Methodol Comput Appl Probab 24, 3049–3073 (2022). https://doi.org/10.1007/s11009-022-09970-1

