Abstract
We use machine learning methods to create a comprehensive measure of credit risk based on qualitative information disclosed in conference calls and in the management's discussion and analysis (MD&A) section of the 10-K. In out-of-sample tests, we find that our measure improves the ability to predict credit events (bankruptcies, interest spreads, and credit rating downgrades), relative to credit risk measures developed by prior research (e.g., z-score). We also find our measure based on conference calls explains within-firm variation in future credit events; however, we find little evidence that the measures of credit risk developed by prior research explain within-firm variation in credit risk. Our measure has utility for both academics and practitioners, as the majority of firms do not have readily available measures of credit risk, such as actively traded CDS or credit ratings. Our study also adds to the growing body of research using machine-learning methods to extract information from conference calls and MD&A to explain key outcomes.
Notes
We define quantitative signals as signals that are based on numerical data (e.g., returns). EDF, which is derived from Merton (1974), uses market-based numerical information (e.g., stock price volatility) to estimate the likelihood of firm default. EDF is likely a function of quantitative and qualitative information, given that market-based information is a function of both qualitative and quantitative information.
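The Merton (1974) logic behind EDF can be summarized in a few lines. The sketch below assumes asset value and asset volatility are already known (in practice they are backed out from equity prices and equity volatility), so all inputs are purely illustrative, and the function name is ours:

```python
import math

def edf(V, F, mu, sigma, T=1.0):
    """Merton-style default probability: the chance that asset value V
    falls below the face value of debt F at horizon T, given asset
    drift mu and asset volatility sigma."""
    # Distance to default under lognormal asset dynamics.
    dd = (math.log(V / F) + (mu - 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    # Standard normal CDF of -DD, via the error function.
    return 0.5 * (1.0 + math.erf(-dd / math.sqrt(2.0)))

# Holding assets fixed, more debt (F closer to V) raises the implied
# default probability.
print(edf(V=100, F=60, mu=0.08, sigma=0.25))
print(edf(V=100, F=90, mu=0.08, sigma=0.25))
```

Because the inputs are market-based (asset volatility is inferred from stock prices), the resulting EDF mixes whatever qualitative and quantitative information the market has already impounded, which is the point made in this note.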
Hillegeist et al. (2004) suggest that EDF is a more comprehensive measure of credit risk, due to its higher “information content” than the Altman Z-Score and Ohlson O-Score; however, they caution that the information content scores of all three measures are still quite low. They provide evidence that EDF alone yields a 12% pseudo-R2 when predicting future bankruptcies, while the O-Score, the next best measure in their paper, yields a 10% pseudo-R2. In contrast, we find that one of our measures based on conference calls yields a pseudo-R2 of 34.2% when predicting future bankruptcies, while the O-Score, the next best measure in our empirical tests, yields a pseudo-R2 of 10.9%.
Since expressing doubt that the firm will continue as a going concern is a dichotomous signal, the going-concern disclosure employed by Mayew et al. (2015) cannot be used to detect high or meaningful increases in credit risk for firms that are not on the verge of bankruptcy. Mayew et al. (2015) find that explicit discussion of the firm’s ability to continue as a going concern is very rare; only approximately 3% of all firm-years and only 39% of firms that eventually file for bankruptcy explicitly discuss their ability to continue as a going concern. In contrast, our machine-learning methods can identify language that is implicitly (rather than only explicitly) related to credit risk, leading to a continuous measure of credit risk that can be applied to a broader set of firms to measure meaningful levels of, or changes in, credit risk. This comment is not a critique of Mayew et al. (2015), because their aim is not to develop a measure of credit risk but to provide empirical evidence to inform the debate on whether the FASB should require management to assess and disclose the entity’s ability to continue as a going concern in the MD&A.
For example, Mayew et al. (2015) create a dictionary of language that questions the firm’s ability to continue as a going concern. We provide evidence that the machine-learning methods extract a wider range of words associated with firm credit risk (e.g., firm performance, debt, liquidity, industry), which are distinct from the explicit discussion of the firm’s ability to continue as a going concern.
Conversations with personnel of both Moody’s and S&P confirm that listening to or engaging in firms’ quarterly conference calls is one of several key inputs to the rating process and is considered a mandatory component of an analyst’s research function.
Research also uses static dictionaries to measure disclosure characteristics, such as tone (Henry 2008; Tetlock et al. 2008; Loughran and McDonald 2011; Price et al. 2012; Henry and Leone 2016; Davis et al. 2012; Feldman et al. 2010), firm risk (Campbell et al. 2014; Kravet and Muslu 2013), or the extent of forward-looking information (Li 2010; Muslu et al. 2014). Other studies use summary measures, such as FOG, BOG, word length, and file size, to measure disclosure readability (Li 2008; Bonsall et al. 2017c; Loughran and McDonald 2014). Still other studies use more sophisticated techniques, such as Latent Dirichlet Allocation (i.e., LDA), to summarize information in disclosures (Dyer et al. 2017; Huang et al. 2018; Bao and Datta 2014). Generally, these methods are not easily adapted to alternative contexts.
We thank Michael Roberts for providing the Compustat-Dealscan linking table, available on WRDS. Refer to Chava and Roberts (2008) for additional details.
We thank Lynn LoPucki for providing his data, available at http://lopucki.law.ucla.edu/index.htm.
Disclosures may differ for firms with public and private debt. Research suggests that firms with lower disclosure quality (e.g., Bharath et al. 2008; Dhaliwal et al. 2011) are more likely to access the private debt market. Vashishtha (2014) shows that borrowers with bank debt are less likely to issue earnings forecasts following a covenant violation. Christensen et al. (2019) find that the likelihood of issuing non-GAAP earnings decreases following a covenant violation. Despite these potential differences in disclosure, we provide evidence that TCR Score explains within-firm variation in credit risk both for firms with private debt and for firms with public debt.
Firms that hold conference calls may fundamentally differ from those that do not, so the conference call results may be subject to selection bias. Note, however, that 42.5% of all firm-quarters on Compustat with public equity and positive total assets host conference calls between 2002 and 2016, and 65.3% of the firms in our MD&A sample hold at least one conference call during the year.
An alternative approach would be to train the model directly to predict credit events, such as bankruptcies or credit rating downgrades; however, these events occur somewhat infrequently, possibly reducing our ability to adequately train the model. Another alternative would be to train the model on other outcomes, such as interest spreads in loan contracts; however, these spreads are observed only immediately after contract negotiation and only for creditworthy firms. This approach is therefore less likely to identify language that remains useful if a firm’s credit risk significantly deteriorates after contract inception.
In an additional robustness test, we change the window over which we measure the average CDS spread to be −5 to +5 and −45 to +45 and find qualitatively similar results. We tabulate and report these results in the Online Appendix.
We estimate additional robustness tests to ensure our results are not sensitive to the requirement that phrases (NGRAMs) be included in at least 10 conference calls. Specifically, we re-estimate TCR Score for the conference call and MD&A by requiring each NGRAM to appear in at least (1) five disclosures or (2) 20 disclosures. We find qualitatively similar results to those reported in the manuscript. We report these results in the Online Appendix.
Supervised LDA differs from unsupervised LDA. The former identifies topics in relation to a dependent variable and is better suited for prediction. The latter identifies topics without respect to a dependent variable and is better for text categorization. See Blei and McAuliffe (2007) for more information on the differences.
We recognize that the initiation of CDS trading may change firms’ disclosures, although the literature provides mixed evidence (Martin and Roychowdhury 2015; Kim et al. 2018; Kang et al. 2020). We are not aware of any evidence that firms alter the information provided to market participants in their conference calls post-CDS trade initiation. Importantly, if the disclosures of non-CDS firms differ significantly from those of CDS firms, the ability of the TCR Score measures to capture credit risk among firms without CDS spreads would be diminished, reducing the power of our tests performed on non-CDS firms.
Machine-learning methods reduce the effects of stationarity assumptions relative to other textual analysis methods. Dictionary approaches, for example, which are often constructed using simple word counts from researcher-generated word lists, are subject to stationarity assumptions. To reduce the effects of stationarity for dictionary methods, the researcher would need to carefully evaluate disclosures to identify words that should no longer be included in the list and words that should be added, and, to root out idiosyncratic biases, multiple researchers would need to undertake this process. For a construct like credit risk, this process could involve a significant amount of time and reduce the timeliness of the dictionary. In contrast, machine-learning methods are constrained only by computer processing resources and are not inherently subject to researcher biases that could influence the choice of words associated with credit risk.
Although unlikely, managers could use our methods to identify the words and weightings that predict credit risk, with the intent of avoiding their use during conference calls, in 10-Ks, and in other disclosures. We expect this to be less feasible during conference calls than in carefully vetted 10-K reports, given that managers do not control analysts’ questions during the Q&A session of the conference call. For example, managers might find it difficult to stop analysts from asking about performance and debt structure, two key topics associated with credit risk (see the Online Appendix).
To decrease the costs of replication, we provide TCR Score through our websites. If future researchers would like to recalculate our measure or apply these methods to another variable, they can access SVR, sLDA, and random forest regression trees using the following links: http://svmlight.joachims.org/, https://pypi.org/project/slda/, http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.RandomForestRegressor.html
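For readers who want a feel for how the pieces fit together, the sketch below pairs scikit-learn's CountVectorizer and RandomForestRegressor to map disclosure n-grams to an observed credit risk signal. It is a minimal illustration of the general approach, not the authors' implementation: the training texts and spread values are hypothetical, and the real pipeline differs in scale and preprocessing.

```python
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical training disclosures paired with contemporaneous CDS
# spreads (in basis points); the real sample is far larger.
train_text = [
    "strong liquidity and declining leverage this quarter",
    "covenant violation and refinancing risk remain elevated",
    "stable cash flow supports debt repayment",
    "going concern doubt and restructuring negotiations continue",
]
train_spread = [80.0, 450.0, 95.0, 600.0]

# ngram_range=(1, 3) captures phrases as well as single words; min_df
# would impose an "appears in at least k disclosures" filter like the
# one described in the notes above (min_df=1 here because the toy
# sample is tiny).
vectorizer = CountVectorizer(ngram_range=(1, 3), min_df=1)
X = vectorizer.fit_transform(train_text)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X, train_spread)

# Score a new disclosure: the fitted value is a text-based risk score.
score = model.predict(vectorizer.transform(
    ["refinancing risk and covenant violation discussed"]))[0]
print(score)
```

After fitting, `model.feature_importances_` ranks the n-grams by their contribution to the prediction, which is the kind of output behind the "top 200 most important words and phrases" discussed in the next note.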
The words and phrases we can categorize represent 89.02% (68.61%) of the total importance of the top 200 most important words and phrases identified by the random forest (SVR) model.
We estimate TCR Score using factor analysis based on all conference call transcripts available over the past 365 days. In an untabulated analysis, we note that factor analysis generates a factor with a mean equal to zero and a standard deviation equal to one.
In an additional untabulated analysis for this sample, we find that the TCR Score variables are negatively correlated with firm performance (ROA), growth (MTB and Sales Growth), size (Size), stock returns (Returns), and analyst following (Analyst Following). We also note that TCR Score is positively correlated with firm leverage (Leverage) and performance volatility (Std[Returns], Std[Income], Std[Revenue]).
For brevity, we combine percentile rankings by summing the number of observations and total future bankruptcies that fall in each group below the 96th percentile (e.g., the 80th through 89th percentiles are grouped together). Additionally, to facilitate comparison, we multiply ZSCORE by negative one for this analysis only so that greater values of ZSCORE indicate greater credit risk. In addition, Est Credit Rating consists of discrete categories for each S&P credit rating and therefore does not have a value for each percentile. For this reason, there are no observations that fall in the 50th–59th and 96th–97th percentiles.
The parameters from Altman (1968) and Ohlson (1980) could be updated to provide a better credit risk measure. We re-estimated the parameters for the ZSCORE and OSCORE in case the parameter estimates have changed since the publication of Altman (1968) and Ohlson (1980). Specifically, we estimated rolling regressions, where the dependent variable is an indicator variable equal to one if the firm enters bankruptcy over the subsequent two years, and we included all of the financial ratios from Altman (1968) and Ohlson (1980) as independent variables. For each quarter in the sample, we estimated this regression using all available data (beginning in 1985) up to that point in time. In addition to re-estimating the coefficients from the models of Altman (1968) and Ohlson (1980), we also used rolling regressions to update the coefficient estimates for estimated credit rating (Est Credit Rating) from Barth et al. (2008) and Beatty et al. (2008). After replacing ZSCORE, OSCORE, and Est Credit Rating with the updated variables, all regression results reported in Tables 4, 5, 6, 7, 8, and 9 are qualitatively similar. These results can be found in the online appendix.
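The expanding-window re-estimation described above can be sketched as follows. This is a rough illustration with simulated data, using a linear probability model via least squares for brevity; the paper's actual ratios, sample, and estimator differ, so every name and number here is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n_quarters, n_ratios = 12, 3

# Hypothetical panel: 50 firms per quarter, each with a few financial
# ratios and a bankruptcy indicator driven by the first ratio.
quarters = np.repeat(np.arange(n_quarters), 50)
X = rng.normal(size=(len(quarters), n_ratios))
y = (X[:, 0] + rng.normal(size=len(quarters)) > 1.5).astype(float)

coefs = {}
for q in range(4, n_quarters):      # start once some history has accrued
    hist = quarters < q             # expanding window: all data to date
    Xh = np.column_stack([np.ones(hist.sum()), X[hist]])
    beta, *_ = np.linalg.lstsq(Xh, y[hist], rcond=None)
    coefs[q] = beta                 # updated ZSCORE/OSCORE-style weights

print(len(coefs))
```

Each quarter's coefficients are estimated using only data observable at that point in time, which is what keeps the updated ZSCORE, OSCORE, and Est Credit Rating measures free of look-ahead bias.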
Kravet and Muslu (2013) also examine analyst forecast dispersion. Although we do not include a specific control for analyst forecast dispersion in our primary tests due to data constraints, we explicitly control for the standard deviation of returns [Std(Returns)], earnings [Std(Income)], and sales [Std(Revenue)]. In additional robustness tests, we explicitly control for analyst forecast dispersion and find qualitatively similar results.
We follow the same calculation using the Pseudo-R2 for nonlinear probit models.
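McFadden's pseudo-R2 is one common variant for nonlinear models of this kind (the paper does not spell out which variant it uses, so the function below is illustrative): it compares the fitted model's log-likelihood to that of a null model that predicts the sample mean for every observation.

```python
import numpy as np

def mcfadden_pseudo_r2(y, p_hat):
    """McFadden's pseudo-R2: 1 - LL(model) / LL(null).

    y: binary outcomes (0/1); p_hat: model-predicted probabilities.
    The null model predicts the sample mean of y for every observation.
    """
    y = np.asarray(y, dtype=float)
    p_hat = np.clip(np.asarray(p_hat, dtype=float), 1e-12, 1 - 1e-12)
    ll_model = np.sum(y * np.log(p_hat) + (1 - y) * np.log(1 - p_hat))
    p_null = np.clip(y.mean(), 1e-12, 1 - 1e-12)
    ll_null = np.sum(y * np.log(p_null) + (1 - y) * np.log(1 - p_null))
    return 1.0 - ll_model / ll_null

# Toy example: well-calibrated probabilities for five outcomes.
print(mcfadden_pseudo_r2([1, 0, 1, 1, 0], [0.9, 0.1, 0.8, 0.95, 0.2]))
```

The measure is 0 when the model does no better than the unconditional bankruptcy rate and approaches 1 as the model assigns probability mass exactly where the events occur, which is the scale on which the 34.2% versus 10.9% comparison in the earlier note is made.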
For quarterly (annual) tests, we include year-quarter (year) fixed effects.
In an additional untabulated robustness test, we use a hazard model to examine future bankruptcy and find that the TCR Score measures based on the conference call significantly predict bankruptcy over the subsequent one and three years. We report these results in the Online Appendix.
To reduce the skewness of ZSCORE and OSCORE, in an untabulated analysis, we take the natural log of these variables and continue to find a statistically and economically significant relation between the TCR Score variables and future credit events. These results include the same control variables included in Tables 5, 6, 7, 8, 9. As an alternative test, we winsorize ZSCORE and OSCORE at the 5% and 95% levels and find similar results.
The results are qualitatively similar when using raw Spread. Following prior research, we take the natural log of Spread to reduce the effects of outliers.
In additional robustness tests, we replace Est Credit Rating with the estimated credit rating measures developed by Alp (2013) and Baghai et al. (2014). We find qualitatively similar results to those reported in Tables 5, 6, 7, 8, 9. The correlation between Est Credit Rating and the measure produced by Alp (2013) is equal to 0.781, and the correlation between Est Credit Rating and the measure produced by Baghai et al. (2014) is equal to 0.827.
In an untabulated analysis, we perform a nonparametric test examining the association between each credit risk proxy and future credit rating downgrades. Similar to results reported in Table 3, we find nonparametric evidence that TCR Score outperforms other credit risk proxies in predicting credit rating downgrades.
References
Agarwal, S., and R. Hauswald. 2010. Distance and private information in lending. The Review of Financial Studies 23 (7): 2757–2788.
Allison, P. 2009. Fixed effect regression models. Sage Publications.
Alp, A. 2013. Structural shifts in credit rating standards. Journal of Finance 68 (6): 2435–2470.
Altman, E. 1968. Financial ratios, discriminant analysis and the prediction of corporate bankruptcy. Journal of Finance 23: 589–609.
Baghai, R., B. Becker, and A. Tamayo. 2014. Have rating agencies become more conservative? Implications for capital structure and debt pricing. Journal of Finance 69 (5): 1961–2005.
Bao, Y., and A. Datta. 2014. Simultaneously discovering and quantifying risk types from textual risk disclosures. Management Science 60 (6): 1371–1391.
Barron’s. 2017. Artificial intelligence, explained. Available at: https://www.barrons.com/articles/sponsored/artificial-intelligence-explained-1508530169.
Barth, M., L. Hodder, and S. Stubben. 2008. Fair value accounting for liabilities and own credit risk. The Accounting Review 83 (3): 629–664.
Beatty, A., J. Weber, and J. Yu. 2008. Conservatism and debt. Journal of Accounting and Economics 45 (2–3): 154–174.
Beaver, W., C. Shakespeare, and M. Soliman. 2006. Differential properties in the ratings of certified versus non-certified bond-rating agencies. Journal of Accounting and Economics 42 (3): 303–334.
Bharath, S., J. Sunder, and S. Sunder. 2008. Accounting quality and debt contracting. The Accounting Review 83 (1): 1–28.
Black, F., and M. Scholes. 1973. The pricing of options and corporate liabilities. Journal of Political Economy 81 (3): 637–654.
Blanco, R., S. Brennan, and I. Marsh. 2005. An empirical analysis of the dynamic relation between investment-grade bonds and credit default swaps. The Journal of Finance 60 (5): 2255–2281.
Blei, D., and J. McAuliffe. 2007. Supervised topic models. Advances in Neural Information Processing Systems 20: 1–128.
Bonsall, S., E. Holzman, and B. Miller. 2016. Managerial ability and credit risk assessment. Management Science 63 (5): 1425–1449.
Bonsall, S., K. Koharki, and M. Neamtiu. 2017a. When do differences in credit rating methodologies matter? Evidence from high information uncertainty borrowers. The Accounting Review 92 (4): 53–79.
Bonsall, S., K. Koharki, and L. Watson. 2017b. Deciphering tax avoidance: Evidence from credit rating disagreements. Contemporary Accounting Research 34 (2): 818–848.
Bonsall, S., A. Leone, B. Miller, and K. Rennekamp. 2017c. A plain English measure of financial reporting readability. Journal of Accounting and Economics 63: 329–357.
Bozanic, Z., and P. Kraft. 2015. Qualitative corporate disclosures and credit analysts’ soft rating adjustments. Working paper, Florida State University.
Bozanic, Z., D.T. Roulstone, and A. Van Buskirk. 2018. Management earnings forecasts and other forward-looking statements. Journal of Accounting and Economics 65 (1): 1–20.
Breiman, L. 2001. Random forests. Machine Learning 45: 5–32.
Campbell, J., H. Chen, D. Dhaliwal, H. Lu, and L. Steele. 2014. The information content of mandatory risk factor disclosures in corporate filings. Review of Accounting Studies 19: 396–455.
Campbell, D., M. Loumioti, and R. Wittenberg-Moerman. 2019. Making sense of soft information: Interpretation bias and loan quality. Journal of Accounting and Economics 68 (2–3): 101240.
Cecchini, M., H. Aytug, G. Koehler, and P. Pathak. 2010. Detecting management fraud in public companies. Management Science 56 (7): 1146–1160.
Chava, S., and M. Roberts. 2008. How does financing impact investment? The role of debt covenants. The Journal of Finance 63 (5): 2085–2121.
Chen, S., B. Miao, and T. Shevlin. 2015. A new measure of disclosure quality: The level of disaggregation of accounting data in annual reports. Journal of Accounting Research 53 (5): 1017–1054.
Christensen, T., H. Pei, S. Pierce, and L. Tan. 2019. Non-GAAP reporting following debt covenant violations. Review of Accounting Studies 24 (2): 629–664.
Costello, A., A. Down, and M. Mehta. 2019. Machine + man: A field experiment on the role of discretion in augmenting AI-based lending models. Working paper.
Das, S., M. Kalimipalli, and S. Nayak. 2014. Did CDS trading improve the market for corporate bonds? Journal of Financial Economics 111 (2): 495–525.
Davis, A., J. Piger, and L. Sedor. 2012. Beyond the numbers: Measuring the information content of earnings press release language. Contemporary Accounting Research 29 (3): 845–868.
Dhaliwal, D., I. Khurana, and R. Pereira. 2011. Firm disclosure policy and the choice between private and public debt. Contemporary Accounting Research 28 (1): 293–330.
Dyer, T., M. Lang, and L. Stice-Lawrence. 2017. The evolution of 10-K textual disclosure: Evidence from latent Dirichlet allocation. Journal of Accounting and Economics 64 (2–3): 221–245.
Feldman, R., S. Govindaraj, J. Livnat, and B. Segal. 2010. Management’s tone change, post earnings announcement drift and accruals. Review of Accounting Studies 15: 915–953.
Frankel, R., J. Jennings, and J. Lee. 2016. Using unstructured and qualitative disclosures to explain accruals. Journal of Accounting and Economics 62 (2): 209–227.
Frankel, R., J. Jennings, and J. Lee. 2019. Assessing the relative explanatory power of narrative content measures using conference calls, earnings predictions and analyst revisions. Working paper.
Ganguin, B., and J. Bilardello. 2005. Fundamentals of corporate credit analysis. New York: McGraw-Hill.
Greene, W. 2004. The behaviour of the maximum likelihood estimator of limited dependent variable models in the presence of fixed effects. The Econometrics Journal 7: 98–119.
Ham, C., and K. Koharki. 2016. The association between corporate general counsel and firm credit risk. Journal of Accounting and Economics 61: 274–293.
Henry, E. 2008. Are investors influenced by how earnings press releases are written? International Journal of Business Communication 45 (4): 363–407.
Henry, E., and A. Leone. 2016. Measuring qualitative information in capital markets research: Comparison of alternative methodologies to measure disclosure tone. The Accounting Review 91 (1): 153–178.
Hillegeist, S., E. Keating, D. Cram, and K. Lundstedt. 2004. Assessing the probability of bankruptcy. Review of Accounting Studies 9 (1): 5–34.
Hollander, S., M. Pronk, and E. Roelofsen. 2010. Does silence speak? An empirical analysis of disclosure choices during conference calls. Journal of Accounting Research 48 (3): 531–563.
Huang, A., R. Lehavy, A. Zang, and R. Zheng. 2018. Analyst information discovery and interpretation roles: A topic modeling approach. Management Science 64 (6): 2833–2855.
Hull, J., M. Predescu, and A. White. 2004. The relationship between credit default swap spreads, bond yields, and credit rating announcements. Journal of Banking and Finance 28: 2789–2811.
Kang, J.K., C. Williams, and R. Wittenberg-Moerman. 2020. CDS trading and non-relationship lending dynamics. Review of Accounting Studies, forthcoming.
Kim, I., and D. Skinner. 2012. Measuring securities litigation risk. Journal of Accounting and Economics 53 (1–2): 290–310.
Kim, J., P. Shroff, D. Vyas, and R. Wittenberg-Moerman. 2018. Credit default swaps and managers’ voluntary disclosure. Journal of Accounting Research 56 (3): 953–988.
Kim, E., M. Sethuraman, and T. Steffen. 2019. The informational role of investor relations officers: Evidence from the debt market. Working paper.
Kravet, T., and V. Muslu. 2013. Textual risk disclosures and investors’ risk perceptions. Review of Accounting Studies 18: 1088–1122.
Lee, J. 2016. Can investors detect managers' lack of spontaneity? Adherence to predetermined scripts during earnings conference calls. The Accounting Review 91 (1): 229–250.
Li, F. 2008. Annual report readability, current earnings, and earnings persistence. Journal of Accounting and Economics 45: 221–247.
Li, F. 2010. The information content of forward-looking statements in corporate filings–a naïve Bayesian machine learning approach. Journal of Accounting Research 48 (5): 1049–1102.
Liberti, J., and M. Petersen. 2019. Information: Hard and soft. The Review of Corporate Finance Studies 8 (1): 1–41.
Loughran, T., and B. McDonald. 2011. When is a liability not a liability? Textual analysis, dictionaries, and 10-Ks. The Journal of Finance 66 (1): 35–65.
Loughran, T., and B. McDonald. 2014. Measuring readability in financial disclosures. Journal of Finance 69: 1643–1671.
Manela, A., and A. Moreira. 2017. News implied volatility and disaster concerns. Journal of Financial Economics 123: 137–162.
Martin, X., and S. Roychowdhury. 2015. Do financial market developments influence accounting practices? Credit default swaps and borrowers’ reporting conservatism. Journal of Accounting and Economics 59 (1): 80–104.
Mayew, W.J. 2008. Evidence of management discrimination among analysts during earnings conference calls. Journal of Accounting Research 46 (3): 627–659.
Mayew, W.J., M. Sethuraman, and M. Venkatachalam. 2015. MD&A Disclosure and the Firm’s ability to continue as a going concern. The Accounting Review 90 (4): 1621–1651.
Merton, R. 1974. On the pricing of corporate debt: The risk structure of interest rates. The Journal of Finance 29 (2): 449–470.
Morgan, D. 2002. Risk and uncertainty in an opaque industry. American Economic Review 92 (4): 874–888.
Muslu, V., S. Radhakrishnan, K.R. Subramanyam, and D. Lim. 2014. Forward-looking MD&A disclosures and the information environment. Management Science 61 (5): 931–948.
Norden, L., and M. Weber. 2004. Informational efficiency of credit default swap and stock markets: The impact of credit rating announcements. Journal of Banking and Finance 28: 2813–2843.
Ohlson, J. 1980. Financial ratios and the probabilistic prediction of bankruptcy. Journal of Accounting Research 18: 109–131.
Plumlee, M., Y. Xie, M. Yan, and J. Yu. 2015. Bank loan spread and private information: Pending approval patents. Review of Accounting Studies 20 (2): 593–638.
Price, S., J. Doran, D. Peterson, and B. Bliss. 2012. Earnings conference calls and stock returns: The incremental informativeness of textual tone. Journal of Banking & Finance 36: 992–1011.
Sethuraman, M. 2019. The effect of reputation shocks to rating agencies on corporate disclosures. The Accounting Review 94 (1): 299–326.
Standard & Poor’s. 2007. A guide to the loan market. New York: Standard & Poor’s.
Standard & Poor’s. 2015. Form NRSRO. Available at: http://www.standardandpoors.com.
Tetlock, P., M. Saar-Tsechansky, and S. Macskassy. 2008. More than words: Quantifying language to measure firms’ fundamentals. The Journal of Finance 63 (3): 1437–1467.
Vashishtha, R. 2014. The role of bank monitoring in borrowers’ discretionary disclosure: Evidence from covenant violations. Journal of Accounting and Economics 57 (2–3): 176–195.
Watts, R. 2003. Conservatism in accounting part I: Explanations and implications. Accounting Horizons 17 (3): 207–221.
Zhang, J. 2008. The contracting benefits of accounting conservatism to lenders and borrowers. Journal of Accounting and Economics 45: 27–54.
Zhang, B., H. Zhou, and H. Zhu. 2009. Explaining credit default swap spreads with the equity volatility and jump risks of individual firms. Review of Financial Studies 22 (12): 5099–5131.
Acknowledgments
The authors thank Sam Bonsall, Mark Bradshaw, Bill McDonald, Scott Richardson, Terry Shevlin, Lakshmanan Shivakumar, Irem Tuna, and workshop participants at Brigham Young University, the CUHK Accounting Research Conference, the Duke/UNC Fall Camp, the Frankfurt School of Finance and Management, London Business School, the Swiss Accounting Research Alpine Camp, the University of California - Irvine, the University of Notre Dame, and the University of Iowa for helpful comments. The authors thank the Mendoza College of Business, Olin Business School, Krannert School of Management, and Terry College of Business for financial support. Data is available upon request from the authors.
Appendix
Cite this article
Donovan, J., Jennings, J., Koharki, K. et al. Measuring credit risk using qualitative disclosure. Rev Account Stud 26, 815–863 (2021). https://doi.org/10.1007/s11142-020-09575-4