Latent variables analysis in structural models: A new decomposition of the Kalman smoother

https://doi.org/10.1016/j.jedc.2021.104097

Abstract

Standard latent variable analysis in structural state space models decomposes latent variables into contributions of structural shocks (shock decomposition), or into contributions of the observable variables (data decomposition). We propose to link the shock decomposition of the latent variables and the data decomposition of the structural shocks in what we call the double decomposition. This decomposition allows us to better gauge the influence of data on latent variables by taking into account the transmission mechanism of each type of shock. We show the usefulness of the double decomposition by analyzing the role of observable variables in estimating the output gap in two models and by studying the role of news in revisions of the output gap.

Introduction

Kalman filter methods are commonly used for the estimation and analysis of macroeconomic models with a state space representation. Researchers usually decompose the estimated latent vector into contributions of the estimated structural shocks using the shock decomposition. [6] and [1] highlight that the connection between incoming data and latent variables can be opaque in complex models and propose to decompose the estimated latent vector in terms of contributions of the observable variables using the so-called data decomposition. However, we argue that a better understanding of the relationship between the observable variables and the latent vector may be achieved by linking the data decomposition of structural shocks and the shock decomposition of the latent vector in what we label as the double decomposition. Because the double decomposition traces the influence of the data on latent variables through estimates of the structural shocks and their subsequent propagation, it provides a causal narrative of the linkage between observable and latent variables. This way of analyzing the estimated path of latent variables can be particularly illuminating when the focus is on inference regarding highly theoretical constructs, such as the natural rate of interest or the “flex-price” output gap, where the relation between observable and latent variable is not intuitive and depends heavily on the model’s theoretical structure. Moreover, the double decomposition can reconcile puzzling or unintuitive features of each standard decomposition when taken separately by providing a clear link between the data decomposition and the structural shocks.

We first show how the double decomposition can be used to study the behavior of a latent variable—the output gap—in a simple model by [5]. In this model, the data decomposition shows that the estimated path of the output gap is almost entirely explained by news on inflation, with a very small role for news on GDP growth. Working through the logic of the double decomposition traces this outcome to the relatively high sensitivity of inflation to real activity in this model. In particular, news on GDP growth is associated with a configuration of shocks with offsetting implications for the output gap, attenuating the informativeness of GDP growth. Second, we illustrate the value of the double decomposition for practitioners studying the output gap in a model where inference is more complex—a version of the model presented in [3]. The data decomposition shows that the estimated path of the output gap in this model is explained by a disparate set of observables, including indicators of real activity, such as consumption growth, and financial indicators, such as the federal funds rate and the corporate bond spread. The double decomposition explains the role of consumption growth, the funds rate, and the spread by showing that forecast errors in these variables are highly informative about permanent technology shocks and shocks to firm risk (effectively, shocks to the spread between the return on capital and the return on risk-free assets), which are the primary drivers of the output gap according to the shock decomposition. Importantly, our discussion of the double decomposition results shows that the close link between the observables most informative about the output gap and shocks to permanent productivity and firm risk emerges quite transparently from core dynamic features of the model, such as the nature of the financial frictions in the model and consumption smoothing by households.

Finally, we illustrate the value of the double decomposition for interpreting the [3] model’s reaction to incoming news, examining in detail the effects of news flow at the beginning of 2014. In this case, shock decompositions of the revisions to the forecast and to the estimate of the output gap show large positive contributions of productivity shocks, despite disappointing news on labor productivity. Moreover, the data decomposition of the revisions is dominated by news about consumption growth, despite disappointing investment and GDP news. Both responses stem from the strong association between consumption growth and permanent productivity shocks, a fact made obvious by the double decomposition.

The rest of the paper is organized as follows. Section 2 overviews the analysis of the latent vector in structural models. Section 3 then illustrates how the double decomposition works in a simple 3-equation New Keynesian model. We then turn to a more complex case, a medium-scale DSGE model, in Section 4. Section 5 concludes.


Latent variable analysis in structural models

Traditionally, empirical macroeconomic researchers focus on the so-called shock decomposition, which decomposes the latent variables into contributions of structural shocks (see Appendix A.2). Recently, exploiting the linearity of the model, [1] and [6] propose the data decomposition, which traces the independent effect of each observable on the estimated latent vector (see Appendix A.3). While the shock decomposition does not provide a link between the data and the latent vector, the data
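The shock decomposition described above rests on the linearity of the state space model: each estimated shock can be propagated through the transition equation separately, and the contributions sum exactly to the full latent path. A minimal sketch of this idea, using a hypothetical transition matrix, shock-loading matrix, and stand-in smoothed shocks (none of which come from the paper's models):

```python
import numpy as np

# Minimal sketch of a shock decomposition in a linear state space model,
#   x_t = T x_{t-1} + R eps_t.
# All matrices and the "smoothed shocks" eps_hat below are hypothetical
# placeholders, not estimates from the paper.
rng = np.random.default_rng(0)
n_state, n_shock, n_periods = 3, 2, 50

T_mat = 0.5 * rng.standard_normal((n_state, n_state))
R_mat = rng.standard_normal((n_state, n_shock))
eps_hat = rng.standard_normal((n_periods, n_shock))  # stand-in for smoothed shocks

def propagate(eps, shock_idx=None):
    """Propagate shocks through the transition equation from a zero
    initial state. If shock_idx is given, zero out all other shocks,
    yielding one column of the shock decomposition."""
    e = np.zeros_like(eps)
    if shock_idx is None:
        e[:] = eps
    else:
        e[:, shock_idx] = eps[:, shock_idx]
    x, path = np.zeros(n_state), []
    for t in range(eps.shape[0]):
        x = T_mat @ x + R_mat @ e[t]
        path.append(x)
    return np.array(path)

full_path = propagate(eps_hat)
contributions = [propagate(eps_hat, j) for j in range(n_shock)]

# By linearity, the per-shock contributions sum to the full latent path
# (the initial state is zero here, so there is no initial-state term).
assert np.allclose(sum(contributions), full_path)
```

With a nonzero initial state, an additional initial-condition term would appear alongside the shock contributions; the zero initialization keeps the sketch minimal.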

A simple model demonstration

In this section, we provide an illustrative application of the double decomposition using a small-scale model. By using the double decomposition, we are able to better understand the economic mechanisms in this model by which some variables, such as inflation, are highly informative about the output gap, while others, such as GDP growth, are less so.

We illustrate how the double decomposition works using an estimated version of the canonical New Keynesian model presented in [5], which includes

The model

To further illustrate the utility of the double decomposition, we now turn to a larger model along the lines of the workhorse models used in academia and central banks, which features a larger set of observable variables and more complex transmission dynamics. In particular, we use an estimated version of the model originally developed by [3] (DGS). Broadly speaking, the DGS model extends the baseline Smets and Wouters (2007) model with financial frictions on the firm side and a time-varying

Conclusion

In this paper, we advocate chaining the decomposition of shocks into contributions from forecast errors to the shock decomposition of the latent vector in order to better understand model inference about latent variables. This double decomposition allows us to gauge the influence of data on the path of latent variables, like the data decomposition. However, by taking into account the data decomposition of the structural shocks and their transmission, we can highlight the economic structure
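The chaining described here can be sketched in code: starting from a (hypothetical, assumed precomputed) data decomposition of the smoothed shocks, propagate each observable's shock contribution through the transition equation to obtain the double decomposition of the latent vector. All names and numbers below are illustrative placeholders, not the paper's estimates:

```python
import numpy as np

# Sketch of the "double decomposition" chaining, assuming a precomputed
# data decomposition of the smoothed shocks:
#   eps_by_obs[i, t, :] = contribution of observable i to the smoothed
#   shock vector at time t.
# The transition is x_t = T x_{t-1} + R eps_t; all matrices here are
# hypothetical placeholders.
rng = np.random.default_rng(1)
n_state, n_shock, n_obs_vars, n_periods = 3, 2, 4, 40

T_mat = 0.4 * rng.standard_normal((n_state, n_state))
R_mat = rng.standard_normal((n_state, n_shock))
eps_by_obs = rng.standard_normal((n_obs_vars, n_periods, n_shock))

def propagate(eps):
    """Run the state transition forward from a zero initial state."""
    x, path = np.zeros(n_state), []
    for t in range(eps.shape[0]):
        x = T_mat @ x + R_mat @ eps[t]
        path.append(x)
    return np.array(path)

# Double decomposition: one latent-vector path per observable, obtained
# by propagating that observable's shock contributions through the model.
latent_by_obs = np.array([propagate(eps_by_obs[i]) for i in range(n_obs_vars)])

# Because the model is linear, summing over observables recovers the path
# implied by the total smoothed shocks, i.e. the data decomposition of
# the latent vector.
total = propagate(eps_by_obs.sum(axis=0))
assert np.allclose(latent_by_obs.sum(axis=0), total)
```

The intermediate objects `eps_by_obs` and `latent_by_obs` make explicit which observable moves which shock, and how that shock in turn moves the latent vector.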

References (7)

  • F. Smets et al.

Shocks and frictions in US business cycles: a Bayesian DSGE approach

    American Economic Review

    (2007)
  • Andrle, M., 2013. Understanding DSGE filters in forecasting and policy analysis. IMF Working Paper...
  • R. Barsky et al.

    The natural rate of interest and its usefulness for monetary policy

    American Economic Review

    (2014)


We thank Mariano Kulish and seminar participants at the ECB, the Banque de France, the 4th Workshop on Empirical Macroeconomics of the Macroeconomics, Policy and Econometrics Research group of Ghent University, the 8th Conference on Growth and Business Cycles in Theory and Practice at the University of Manchester, the RCEF Conference in Rimini, the 12th Dynare Conference, the XIX Annual Inflation Targeting Seminar of the Banco Central do Brasil, and the 44th Simposio of the Spanish Economic Association. The views expressed in this paper are solely the responsibility of the authors and should not be interpreted as reflecting the views of the Board of Governors of the Federal Reserve System.
