
Exploiting recommendation confidence in decision-aware recommender systems

  • Published in: Journal of Intelligent Information Systems

Abstract

The main goal of a recommender system is to suggest relevant items to users, although other utility dimensions – such as diversity, novelty, confidence, or the ability to provide explanations – are often considered. In this work, we investigate confidence from the perspective of the system: the confidence a system has in its own recommendations. More specifically, we focus on different methods to make recommendation algorithms aware of whether an item should be suggested at all. Sometimes it is better not to recommend than to fail, because failure can decrease user confidence in the system. We therefore hypothesise that the system should only show its most reliable suggestions, increasing the performance of those recommendations at the expense of, presumably, reducing the number of potential recommendations. Unlike other works in the literature, our approaches do not exploit or analyse the input data; instead, they consider intrinsic aspects of the recommendation algorithms or of the components used during prediction. We propose a taxonomy of techniques, applicable to several families of recommender systems, that include mechanisms to decide whether a recommendation should be generated. In particular, we exploit the uncertainty in the prediction score for a probabilistic matrix factorisation algorithm and for the family of nearest-neighbour algorithms, the support of the prediction score for nearest-neighbour algorithms, and a method independent of the algorithm. We study how the performance of a recommendation algorithm evolves when it decides not to recommend in some situations. If the decision to avoid a recommendation is sensible – i.e., not random but related to the information available to the system about the target user or item – performance is expected to improve at the expense of other quality dimensions such as coverage, novelty, or diversity.
This balance is critical, since very high precision can be achieved by recommending only one item to a single user, which would not be a very useful recommender. Because of this, on the one hand, we explore techniques to combine precision and coverage metrics, an open problem in the area. On the other hand, we propose a family of metrics (correctness) based on the assumption that it is better to avoid a recommendation than to provide a bad one. In summary, the contributions of this paper are twofold: a taxonomy of techniques, applicable to several families of recommender systems, that include mechanisms to decide whether a recommendation should be generated, and a first exploration of the combination of evaluation metrics, mostly focused on measures for precision and coverage. Empirical results show that these approaches obtain large precision improvements at the expense of user and item coverage, with varying levels of novelty and diversity.


Notes

  1. For instance, in a Normal distribution, λ = 1.96 for a confidence interval with a significance level of 0.05.

  2. https://grouplens.org/datasets/movielens/

  3. http://eigentaste.berkeley.edu/dataset/

  4. https://github.com/RankSys/RankSys

  5. https://github.com/recommenders/rival

References

  • Adomavicius, G., Kamireddy, S., Kwon, Y. (2007). Towards more confident recommendations: Improving recommender systems using filtering approach based on rating variance. In WITS 2007 - Proceedings, 17th Annual Workshop on Information Technologies and Systems (pp. 152–157). Social Science Research Network.

  • Baeza-Yates, R.A., & Ribeiro-Neto, B. A. (2011). Modern Information Retrieval - the concepts and technology behind search, 2nd edn. Harlow: Pearson Education Ltd.

  • Bell, R.M., & Koren, Y. (2007). Lessons from the netflix prize challenge. SIGKDD Explorations, 9(2), 75–79.

  • Bellogin, A., Castells, P., Cantador, I. (2011). Precision-oriented evaluation of recommender systems: an algorithmic comparison. In Proceedings of the fifth ACM conference on Recommender systems (pp. 333–336): ACM.

  • Bishop, C.M. (2006). Pattern recognition and machine learning. Berlin: Springer.

  • Box, G.E., & Tiao, G.C. (2011). Bayesian inference in statistical analysis Vol. 40. NY: Wiley.

  • Castells, P., Hurley, N.J., Vargas, S. (2015). Novelty and diversity in recommender systems. In Recommender Systems Handbook (pp. 881–918): Springer.

  • Cremonesi, P., Koren, Y., Turrin, R. (2010). Performance of recommender algorithms on top-n recommendation tasks. In Amatriain, X., Torrens, M., Resnick, P., Zanker, M. (Eds.) Proceedings of the 2010 ACM Conference on Recommender Systems, RecSys 2010 (pp. 39–46). Barcelona: ACM.

  • Ekstrand, M.D., Riedl, J.T., Konstan, J.A. (2011). Collaborative filtering recommender systems. Foundations and Trends in Human-Computer Interaction, 4(2), 81–173.

  • Gunawardana, A., & Shani, G. (2015). Evaluating recommender systems. In Recommender Systems Handbook (pp. 265–308): Springer.

  • Herlocker, J.L., Konstan, J.A., Riedl, J. (2002). An empirical analysis of design choices in neighborhood-based collaborative filtering algorithms. Information Retrieval, 5(4), 287–310.

  • Herlocker, J.L., Konstan, J.A., Terveen, L.G., Riedl, J. (2004). Evaluating collaborative filtering recommender systems. ACM Trans Inf Syst, 22(1), 5–53.

  • Himabindu, T.V.R., Padmanabhan, V., Pujari, A.K., Sattar, A. (2016). Prediction with confidence in item based collaborative filtering. In Booth, R., & Zhang, M. (Eds.) PRICAI 2016: Trends in Artificial Intelligence - 14th Pacific Rim International Conference on Artificial Intelligence, Proceedings, Lecture Notes in Computer Science, Vol. 9810 (pp. 125–138). Phuket: Springer. https://doi.org/10.1007/978-3-319-42911-3_11

  • Hu, Y., Koren, Y., Volinsky, C. (2008). Collaborative filtering for implicit feedback datasets. In ICDM (pp. 263–272): IEEE Computer Society.

  • Hull, D.A. (1993). Using statistical testing in the evaluation of retrieval experiments. In SIGIR (pp 329–338).

  • Jannach, D., & Adomavicius, G. (2016). Recommendations with a purpose. In Sen, S., Geyer, W., Freyne, J., Castells, P. (Eds.) Proceedings of the 10th ACM Conference on Recommender Systems. https://doi.org/10.1145/2959100.2959186 (pp. 7–10). Boston: ACM.

  • Jugovac, M., Jannach, D., Lerche, L. (2017). Efficient optimization of multiple recommendation quality factors according to individual user tendencies. Expert Systems with Applications, 81, 321–331. https://doi.org/10.1016/j.eswa.2017.03.055.

  • Karypis, G. (2001). Evaluation of item-based top-n recommendation algorithms. In Proceedings of the tenth international conference on Information and knowledge management (pp. 247–254): ACM.

  • Koren, Y., Bell, R., Volinsky, C., et al. (2009). Matrix factorization techniques for recommender systems. Computer, 42(8), 30–37.

  • Lacerda, A. (2017). Multi-objective ranked bandits for recommender systems. Neurocomputing, 246, 12–24. https://doi.org/10.1016/j.neucom.2016.12.076.

  • Latha, R., & Nadarajan, R. (2015). Ranking based approach for noise handling in recommender systems. In Dziech, A., Leszczuk, M., Baran, R. (Eds.). Cham: Springer International Publishing.

  • Lim, Y.J., & Teh, Y.W. (2007). Variational bayesian approach to movie rating prediction. In Proceedings of KDD cup and workshop, (Vol. 7 pp. 15–21): Citeseer.

  • Linden, G., Smith, B., York, J. (2003). Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet Computing, 7(1), 76–80.

  • Mazurowski, M.A. (2013). Estimating confidence of individual rating predictions in collaborative filtering recommender systems. Expert Systems with Applications, 40 (10), 3847–3857. https://doi.org/10.1016/j.eswa.2012.12.102.

  • McNee, S.M., Lam, S.K., Guetzlaff, C., Konstan, J.A., Riedl, J. (2003). Confidence displays and training in recommender systems. In INTERACT: IOS Press.

  • McNee, S.M., Riedl, J., Konstan, J.A. (2006). Being accurate is not enough: how accuracy metrics have hurt recommender systems. In CHI (pp. 1097–1101): ACM.

  • Nakajima, S., & Sugiyama, M. (2011). Theoretical analysis of bayesian matrix factorization. Journal of Machine Learning Research, 12(Sep), 2583–2648.

  • Ning, X., Desrosiers, C., Karypis, G. (2015). A comprehensive survey of neighborhood-based recommendation methods. In Recommender Systems Handbook (pp. 37–76): Springer.

  • O’Donovan, J., & Smyth, B. (2005). Trust in recommender systems. In IUI (pp. 167–174): ACM.

  • Peñas, A., & Rodrigo, Á. (2011). A simple measure to assess non-response. In Lin, D., Matsumoto, Y., Mihalcea, R. (Eds.) The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 1415–1424). Portland: The Association for Computer Linguistics.

  • Salakhutdinov, R., & Mnih, A. (2011). Probabilistic matrix factorization. In NIPS, (Vol. 20 pp. 1–8).

  • Sarwar, B., Karypis, G., Konstan, J., Riedl, J. (2001). Item-based collaborative filtering recommendation algorithms. In Proceedings of the 10th international conference on World Wide Web (pp. 285–295): ACM.

  • Smyth, B., Wilson, D.C., O’Sullivan, D. (2002). Data mining support for case-based collaborative recommendation. In AICS, Springer, Lecture Notes in Computer Science, (Vol. 2464 pp. 111–118).

  • Srebro, N., Jaakkola, T., et al. (2003). Weighted low-rank approximations. In ICML, (Vol. 3 pp. 720–727).

  • Toledo, R.Y., Mota, Y.C., Martínez-López, L. (2015). Correcting noisy ratings in collaborative recommender systems. Knowledge-Based Systems, 76, 96–108. https://doi.org/10.1016/j.knosys.2014.12.011.

  • Vargas, S., & Castells, P. (2011). Rank and relevance in novelty and diversity metrics for recommender systems. In Proceedings of the fifth ACM conference on Recommender systems (pp. 109–116): ACM.

  • Wang, S., Gong, M., Li, H., Yang, J. (2016). Multi-objective optimization for long tail recommendation. Knowledge-Based Systems, 104, 145–155. https://doi.org/10.1016/j.knosys.2016.04.018.

  • Zhang, M., Guo, X., Chen, G., Wei, Q. (2014). Predicting consumer information search benefits for personalized online product ranking: a confidence-based approach. In Siau, K., Li, Q., Guo, X. (Eds.) 18th Pacific Asia Conference on Information Systems, PACIS 2014. http://aisel.aisnet.org/pacis2014/375 (p. 375). Chengdu.

  • Zhang, M., Guo, X., Chen, G. (2016). Prediction uncertainty in collaborative filtering: Enhancing personalized online product ranking. Decision Support Systems, 83, 10–21. https://doi.org/10.1016/j.dss.2015.12.004.

Acknowledgements

This work was funded by the national Spanish Government under project TIN2016-80630-P. The authors also acknowledge the very helpful feedback from the three anonymous reviewers.

Author information

Correspondence to Alejandro Bellogín.

Appendix: Complete derivation of variational Bayesian

The goal of this algorithm is to minimise the objective function in (6) using a probabilistic model where R are the observations, U and I are the parameters, and some regularisation is introduced through priors on these parameters. As before, user u is represented by the u-th row of matrix U and item i is represented by the i-th column of matrix I; nonetheless, from now on, we shall consider these column vectors as row vectors.

The first hypothesis of the model is that the rating probability, given matrices U and I, follows a Normal distribution with mean \(u^{\top} i\) and variance \(\tau^{2}\):

$$ P(r_{ui}\mid U,I) = N(u^{\top} \cdot i, \tau^{2}) $$
(32)

Moreover, the entries of these matrices U and I follow independent distributions; in particular, \(u_{l} \sim N(0,{\sigma _{l}^{2}})\) and \(i_{l} \sim N(0,{\rho _{l}^{2}})\). As a consequence, the density functions of U and I are formulated as in (33):

$$ P(U) = \prod\limits_{u = 1}^{V}\prod\limits_{l = 1}^{D}N(u_{l}\mid 0,{\sigma_{l}^{2}}) \qquad P(I) = \prod\limits_{i = 1}^{J}\prod\limits_{l = 1}^{D}N(i_{l}\mid 0,{\rho_{l}^{2}}) $$
(33)

The goal now is to compute or approximate the posterior probability P(U, I∣R), so that we can compute a predictive distribution on ratings given the observation matrix, as in (34):

$$ P(\hat{r}_{ui}\mid R) = \int P(\hat{r}_{ui}\mid u,i,\tau^{2})P(U,I\mid R) \, dU \, dI $$
(34)

Since computing this posterior probability P(U, I∣R) is infeasible, the proposed algorithm aims at approximating this distribution through Bayesian inference, more specifically, through the method known as Mean-Field Variational Inference.

1.1 Bayesian inference

Bayesian inference is a method that approximates the posterior distribution P(Z∣R) of a set of unobserved variables Z = {Z1, ⋯ , ZD}, given known information R, through a variational distribution Q(Z) (Box and Tiao 2011). For this, a function Q(Z) is sought that minimises a dissimilarity function d(Q, P). In the specific case of the Mean-Field Variational Inference method, the dissimilarity function is the Kullback-Leibler divergence between P and Q (Bishop 2006):

$$ D_{KL}(Q \mid \mid P) = \sum\limits_{Z}Q(Z) \log{\frac{Q(Z)}{P(Z \mid R)}} = \sum\limits_{Z} Q(Z) \log \frac{Q(Z)}{P(Z,R)} + \log P(R) $$
(35)

where we use that P(Z∣R) = P(Z, R)/P(R).

Furthermore, by solving (35) for log P(R) we obtain the following (36):

$$ \log P(R) = D_{KL}(Q \mid \mid P) - \sum\limits_{Z}Q(Z) \log \frac{Q(Z)}{P(Z,R)} = D_{KL}(Q \mid \mid P) + \mathscr{F}(Q) $$
(36)

Since log P(R) is a constant in (36), minimising DKL(Q∣∣P) is equivalent to maximising \(\mathscr{F}(Q)\).

Indeed, \(\mathscr{F}(Q)\) is denoted as the (negative) variational free energy, since it can be written as a sum of Q’s entropy and an energy (37):

$$\begin{array}{@{}rcl@{}} \mathscr{F}(Q) &=& -\sum\limits_{Z}Q(Z) \log Q(Z) + \sum\limits_{Z}{Q(Z) \log{P(Z,R)}}\\ &=& H(Q) + \mathbb{E}[\log P(Z,R)] = \mathbb{E}_{Q(z)}[\log P(Z,R) - \log Q(Z)] \end{array} $$
(37)
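As a quick numerical illustration of the decomposition in (36) and (37), the following sketch verifies that log P(R) = DKL(Q∣∣P) + \(\mathscr{F}(Q)\) holds for any variational Q; the toy joint distribution and the choice of Q are purely illustrative assumptions.

```python
import numpy as np

# Hypothetical toy joint P(Z, R=r) over a discrete latent Z with 3 states,
# used only to illustrate the decomposition log P(R) = D_KL(Q||P) + F(Q).
p_joint = np.array([0.2, 0.1, 0.4])        # P(Z=z, R=r) for the observed r
p_r = p_joint.sum()                        # P(R=r), marginalising Z
p_post = p_joint / p_r                     # P(Z | R=r)

q = np.array([0.5, 0.2, 0.3])              # an arbitrary variational Q(Z)

kl = np.sum(q * np.log(q / p_post))        # D_KL(Q || P(Z|R)), Eq. (35)
free_energy = np.sum(q * (np.log(p_joint) - np.log(q)))  # F(Q), Eq. (37)

# The decomposition in (36) holds exactly, for any Q:
assert np.isclose(np.log(p_r), kl + free_energy)
```

Since DKL(Q∣∣P) ≥ 0, the free energy is a lower bound on log P(R), which is why maximising it over Q tightens the approximation.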

1.2 Bayesian inference in VB

Once we apply the Mean-Field Variational Inference method presented above, where Z = {U, I} are the unobserved variables and the rating matrix R is the known information, the goal is to estimate a variational distribution function Q(U, I) that approximates the distribution P(U, I∣R).

For this, we aim at maximising the so-called (negative) variational free energy defined as in (38):

$$\begin{array}{@{}rcl@{}} \mathscr{F}(Q(U,I)) &=& \mathbb{E}_{Q(U,I)}[\log P(U,I,R) - \log Q(U,I)]\\ &=& -\sum Q(U,I)(\log Q(U,I) - \log P(U,I,R)) \end{array} $$
(38)

In practice, it is intractable to maximise \(\mathscr{F}(Q(U,I))\) directly and, because of this, the algorithm applies a further simplification by assuming that Q(U, I) = Q(U)Q(I). We now arrive at the definitive formulation for \(\mathscr{F}(Q(U)Q(I))\), (39), as defined in Lim and Teh (2007):

$$\begin{array}{@{}rcl@{}} \mathscr{F}(Q(U)Q(I)) &=& \mathbb{E}_{Q(U)Q(I)}[\log P(U,I,R) - \log Q(U,I)]\\ &=& \mathbb{E}_{Q(U)Q(I)}\left[\log \frac{P(R \mid U,I)}{P(U,I)}\right] + H\left( Q(U,I)\right)\\ &=& \mathbb{E}_{Q(U)Q(I)}\left[\log P(R \mid U,I) - \log P(U) - \log P(I)\right] + H(Q(U,I))\\ &=& -\frac{K}{2} \log (2\pi \tau^{2}) -\frac{1}{2}\sum\limits_{(u,i)}\frac{\mathbb{E}_{Q(U)Q(I)}[(r_{ui}-u^{\top} i)^{2}]}{\tau^{2}}\\ &&- \frac{V}{2} \sum\limits_{l = 1}^{D}\log (2\pi {\sigma_{l}^{2}}) -\frac{1}{2}\sum\limits_{l = 1}^{D}\frac{\sum\limits_{u = 1}^{V}\mathbb{E}_{Q(U)}[{u_{l}^{2}}]}{{\sigma_{l}^{2}}}\\ &&-\frac{J}{2}\sum\limits_{l = 1}^{D}\log (2\pi {\rho_{l}^{2}}) -\frac{1}{2}\sum\limits_{l = 1}^{D} \frac{\sum\limits_{i = 1}^{J}\mathbb{E}_{Q(I)}[{i_{l}^{2}}]}{{\rho_{l}^{2}}}+ H(Q(U,I)) \end{array} $$
(39)

where K is the number of observed entries in R, or, in other terms, the number of ratings or interactions known by the system at training time.

Maximising \(\mathscr{F}(Q(U)Q(I))\) with respect to Q(U) while keeping Q(I) fixed, and vice versa, we obtain the distributions for users and items. Equations (40) and (41) describe the covariance matrix ϕu and the mean \(\overline {u}\) of user u in Q(U), respectively.

$$ \phi_{u} = \left( \left( \begin{array}{cccc} \frac{1}{{\sigma_{1}^{2}}} & & & 0 \\ & & {\ddots} & \\ 0 & & & \frac{1}{{\sigma_{D}^{2}}} \end{array}\right) + \sum\limits_{i \in O(u)}\frac{\psi_{i} + \overline{i}^{\top} \overline{i}}{\tau^{2}} \right)^{-1} $$
(40)
$$ \overline{u} = \phi_{u} \left( \sum\limits_{i \in O(u)}\frac{r_{ui}\overline{i}}{\tau^{2}} \right) $$
(41)

where O(u) is the set of items observed by user u, that is, the identifiers i such that rui was observed in the rating matrix R.

Similarly, (42) and (43) show the formulation for the covariance matrix ψi and item’s i mean \(\overline {i}\) in Q(I), respectively.

$$ \psi_{i} = \left( \left( \begin{array}{cccc} \frac{1}{{\rho_{1}^{2}}} & & & 0 \\ & & {\ddots} & \\ 0 & & & \frac{1}{{\rho_{D}^{2}}} \end{array}\right) + \sum\limits_{u \in O(i)}\frac{\phi_{u} + \overline{u}^{\top} \overline{u}}{\tau^{2}} \right)^{-1} $$
(42)
$$ \overline{i} = \psi_{i} \left( \sum\limits_{u \in O(i)}\frac{r_{ui}\overline{u}}{\tau^{2}} \right) $$
(43)

where O(i) denotes the set of users that have rated item i.

The variances σl, ρl, and τ can also be estimated and learned by the model. By differentiating (39) with respect to σl, ρl, and τ, setting the derivatives to zero, and solving for the optimal parameters, we obtain (44), (45), and (46).

$$ {\sigma_{l}^{2}} = \frac{1}{V-1}\sum\limits_{u = 1}^{V}\left[(\phi_{u})_{ll} + \overline{u_{l}}^{2}\right] $$
(44)
$$ {\rho_{l}^{2}} = \frac{1}{J-1}\sum\limits_{i = 1}^{J}\left[(\psi_{i})_{ll} + \overline{i_{l}}^{2}\right] $$
(45)
$$ \tau^{2} = \frac{1}{K-1}\sum\limits_{(u,i)}\left[r_{ui}^{2}-2r_{ui}\overline{u}^{\top} \overline{i} + trace\left[(\phi_{u} + \overline{u}^{\top} \overline{u})(\psi_{i} + \overline{i}^{\top} \overline{i})\right]\right] $$
(46)
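The alternating updates (40)–(43) can be sketched as follows. The toy rating matrix, its dimensions, and the initialisation are illustrative assumptions, and the hyperparameters τ², σl², ρl² are kept fixed here rather than re-estimated via (44)–(46).

```python
import numpy as np

# Minimal sketch of the VB coordinate updates (40)-(43) on a toy dataset.
rng = np.random.default_rng(0)
V, J, D = 4, 5, 2                            # users, items, latent factors
R = rng.integers(1, 6, size=(V, J)).astype(float)
mask = rng.random((V, J)) < 0.8              # which entries are observed

tau2 = 1.0                                   # rating noise variance tau^2
sigma2 = np.ones(D)                          # prior variances sigma_l^2
rho2 = np.ones(D)                            # prior variances rho_l^2

u_mean = rng.normal(size=(V, D)); phi = np.stack([np.eye(D)] * V)
i_mean = rng.normal(size=(J, D)); psi = np.stack([np.eye(D)] * J)

for _ in range(20):
    # Eqs. (40)-(41): update Q(U) with Q(I) fixed
    for u in range(V):
        items = np.where(mask[u])[0]
        prec = np.diag(1 / sigma2) + sum(
            (psi[i] + np.outer(i_mean[i], i_mean[i])) for i in items) / tau2
        phi[u] = np.linalg.inv(prec)
        u_mean[u] = phi[u] @ (R[u, items] @ i_mean[items] / tau2)
    # Eqs. (42)-(43): update Q(I) with Q(U) fixed
    for i in range(J):
        users = np.where(mask[:, i])[0]
        prec = np.diag(1 / rho2) + sum(
            (phi[u] + np.outer(u_mean[u], u_mean[u])) for u in users) / tau2
        psi[i] = np.linalg.inv(prec)
        i_mean[i] = psi[i] @ (R[users, i] @ u_mean[users] / tau2)

# Eq. (49): the predicted rating is the product of the variational means
r_hat = u_mean @ i_mean.T
```

Each precision matrix is a diagonal prior term plus a data term, so it is always positive definite and the inversion is well defined even for users or items with no observed ratings.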

1.3 Rating prediction in VB

Once the optimal approximation for P(U, IR) is calculated, that is, Q(U, I), it can be used to predict future interactions between users and the system. For this, given a matrix R, we find that the distribution of a new rating is the following (47):

$$ P(\hat{r}_{ui}\mid R) = \int P(\hat{r}_{ui}\mid u^{\top} i,\tau^{2}) Q(U,I) dUdI = \int N(\hat{r}_{ui}\mid u^{\top} i, \tau^{2})Q(U,I)dUdI $$
(47)

Therefore, we can obtain its mean and, hence, the estimated or predicted rating:

$$ \mathbb{E}(\hat{r}_{ui}\mid R) = \int \int \hat{r}_{ui} N(\hat{r}_{ui}\mid u^{\top} i,\tau^{2}) Q(U,I) dUdId\hat{r}_{ui} $$
(48)

Finally, when we apply Fubini’s theorem, we obtain the following formulation:

$$ \mathbb{E}(\hat{r}_{ui}\mid R) = \int \underbrace{\int \hat{r}_{ui} N(\hat{r}_{ui}\mid u^{\top} i,\tau^{2}) d\hat{r}_{ui}}_{u^{\top} i} Q(U,I) dUdI = \mathbb{E}(u^{\top} i) = \overline{u}^{\top} \overline{i} $$
(49)

1.4 Standard deviation of predicted rating in VB

We derive an explicit formulation for the variance (and hence the standard deviation) of the predicted rating, using mean-field variational inference. First, we use the general decomposition of the variance when estimating rating \(\hat {r}_{ui}\) for user u and item i:

$$ Var(\hat{r}_{ui}\mid R) = \mathbb{E}(\hat{r}_{ui}^{2}\mid R) - \mathbb{E}(\hat{r}_{ui}\mid R)^{2} $$
(50)

Considering the formulation presented in (49), we know that \(\mathbb {E}(\hat {r}_{ui}\mid R)^{2} = (\overline {u}^{\top } \overline {i})^{2} = \overline {i}^{\top } \overline {u} \overline {u}^{\top } \overline {i}\). Hence:

$$\begin{array}{@{}rcl@{}} \mathbb{E}(\hat{r}_{ui}^{2}\mid R) &=& \int \int \hat{r}_{ui}^{2}N(\hat{r}_{ui}\mid u^{\top} i,\tau^{2})Q(U,I)dUdId\hat{r}_{ui}\\ &=& \int \underbrace{\int \hat{r}_{ui}^{2}N(\hat{r}_{ui}\mid u^{\top} i,\tau^{2})d\hat{r}_{ui}}_{\mathbb{E}(\hat{r}_{ui}^{2}) = \tau^{2}+\mathbb{E}(\hat{r}_{ui})^{2} = \tau^{2} + i^{\top} uu^{\top} i} Q(U,I)dUdI\\ &=& \mathbb{E}(\tau^{2} + i^{\top} uu^{\top} i)\\ &=& \tau^{2} + \mathbb{E}(i^{\top} uu^{\top} i)\\ &=& \tau^{2} + trace\left((\phi_{u} + \overline{u} \overline{u}^{\top} )(\psi_{i} + \overline{i} \overline{i}^{\top} )\right) \end{array} $$
(51)

And therefore:

$$ Var(\hat{r}_{ui}) = Var(\hat{r}_{ui}\mid R) = \tau^{2} + \text{trace}\left( \left( \phi_{u} + \overline{u} \overline{u}^{\top}\right)\left( \psi_{i} + \overline{i} \overline{i}^{\top}\right)\right) - \overline{i}^{\top} \overline{u} \overline{u}^{\top} \overline{i} $$
(52)

which gives us an explicit estimation of the deviation (uncertainty) on the predicted rating \(\hat {r}_{ui}\).
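As a sanity check of (52), one can compare the closed form against a Monte Carlo estimate obtained by sampling u ∼ Q(U) and i ∼ Q(I) and then drawing the rating from the Normal in (32); all numbers below are illustrative assumptions for a single (user, item) pair with two latent factors.

```python
import numpy as np

rng = np.random.default_rng(1)
D, tau2 = 2, 0.25
u_bar = np.array([0.8, -0.3]); phi = np.array([[0.05, 0.01], [0.01, 0.04]])
i_bar = np.array([0.5, 0.6]);  psi = np.array([[0.03, 0.0], [0.0, 0.06]])

# Closed form (52): Var = tau^2 + trace((phi + u u^T)(psi + i i^T)) - (u^T i)^2
var_closed = (tau2
              + np.trace((phi + np.outer(u_bar, u_bar))
                         @ (psi + np.outer(i_bar, i_bar)))
              - (u_bar @ i_bar) ** 2)

# Monte Carlo: u ~ Q(U), i ~ Q(I), then r | u, i ~ N(u^T i, tau^2) as in (32)
n = 200_000
u = rng.multivariate_normal(u_bar, phi, size=n)
i = rng.multivariate_normal(i_bar, psi, size=n)
r = np.einsum("nd,nd->n", u, i) + rng.normal(0, np.sqrt(tau2), size=n)

assert abs(r.var() - var_closed) < 0.01
```

The sample mean of r also matches the prediction \(\overline{u}^{\top} \overline{i}\) from (49), confirming both moments of the predictive distribution.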


Cite this article

Mesas, R.M., Bellogín, A. Exploiting recommendation confidence in decision-aware recommender systems. J Intell Inf Syst 54, 45–78 (2020). https://doi.org/10.1007/s10844-018-0526-3
