Lawrence Christiano is the Alfred W. Chase Chair in Business Institutions and a professor of economics at Northwestern University. He has been affiliated with the Federal Reserve Bank of Minneapolis since 1985 and is currently a research consultant. He has also taught at Carnegie Mellon University and the University of Chicago.
Larry received his B.A. in history and economics and M.A. in economics at the University of Minnesota. He then went on to earn his M.Sc. in econometrics and mathematical economics at the London School of Economics and his Ph.D. at Columbia University. His research has focused on macroeconomic theory, policy, and econometrics.
Larry’s work has been published in the Journal of Economic Theory, the American Economic Review, the Review of Economics and Statistics, and numerous other journals. In addition to his work for the Minneapolis Fed, he has also served as a research consultant for the Fed’s Board of Governors and the Federal Reserve Banks of Cleveland and Chicago, and has been a visiting scholar at the International Monetary Fund and the European Central Bank. He is the recipient of six National Science Foundation grants and is a Fellow of the Econometric Society.
The financialization view is that increased trading in commodity futures markets is associated with increases in the growth rate and volatility of commodity spot prices. This view gained credence because in the 2000s trading volume increased sharply and many commodity prices rose and became more volatile. Using a large panel dataset we constructed, which includes commodities with and without futures markets, we find no empirical link between increased futures market trading and changes in price behavior. Our data shed light on the economic role of futures markets. The conventional view is that futures markets provide one-way insurance by allowing outsiders, traders with no direct interest in a commodity, to insure insiders, traders with a direct interest. The data are not consistent with the conventional view, and we argue that they point to an alternative mutual insurance view, in which all participants insure each other. We formalize this view in a model and show that it is consistent with key features of the data.
The Great Recession was particularly severe and has endured far longer than most recessions. Economists now believe it was caused by a perfect storm of declining home prices, a financial system heavily invested in house-related assets and a shadow banking system highly vulnerable to bank runs or rollover risk. It has lasted longer than most recessions because economically damaged households were unwilling or unable to increase spending, thus perpetuating the recession by a mechanism known as the paradox of thrift. Economists believe the Great Recession wasn’t foreseen because the size and fragility of the shadow banking system had gone unnoticed.
The recession has had an inordinate impact on macroeconomics as a discipline, leading economists to reconsider two largely discarded theories: IS-LM and the paradox of thrift. It has also forced theorists to better understand and incorporate the financial sector into their models, the most promising of which focus on mismatch between the maturity periods of assets and liabilities held by banks.
Policymakers concerned about rapid swings in commodity prices seek economic guidance about causal factors and future trends, but standard models—based on Harold Hotelling’s classic 1931 theory—are unable to explain actual data on price variability for a wide range of commodities. In this paper, we review this “Hotelling puzzle” and suggest modifications to current theory that may improve explanations of commodity price changes and provide better policy advice.
The United States is indisputably undergoing a financial crisis and is perhaps headed for a deep recession. Here we examine three claims about the way the financial crisis is affecting the economy as a whole and argue that all three claims are myths. We also present three underappreciated facts about how the financial system intermediates funds between households and corporate businesses. Conventional analyses of the financial crisis focus on interest rate spreads. We argue that such analyses may lead to mistaken inferences about the real costs of borrowing and argue that, during financial crises, variations in the levels of nominal interest rates might lead to better inferences about variations in the real costs of borrowing. Moreover, we argue that even if the current increase in spreads indicates an increase in the riskiness of the underlying projects, this increase does not, by itself, necessarily indicate the need for massive government intervention. We call for policymakers to articulate the precise nature of the market failure they see, to present hard evidence that differentiates their view of the data from other views that would not require such intervention, and to share with the public the logic and evidence supporting the case that the particular intervention they are advocating will fix this market failure.
Why is inflation persistently high in some periods and low in others? The reason may be absence of commitment in monetary policy. In a standard model, absence of commitment leads to multiple equilibria, or expectation traps, even without trigger strategies. In these traps, expectations of high or low inflation lead the public to take defensive actions, which then make accommodating those expectations the optimal monetary policy. Under commitment, the equilibrium is unique and the inflation rate is low on average. This analysis suggests that institutions which promote commitment can prevent high inflation episodes from recurring.
This study analyzes two monetary economies, a cash-credit good model and a limited-participation model. In these models, monetary policy is made by a benevolent policymaker who cannot commit to future policies. The study defines and analyzes Markov equilibrium in these economies and shows that there is no time-inconsistency problem for a wide range of parameter values.
We introduce two modifications into the standard real business cycle model: habit persistence preferences and limitations on intersectoral factor mobility. The resulting model is consistent with the observed mean equity premium, mean risk-free rate and Sharpe ratio on equity. The model does roughly as well as the standard real business cycle model with respect to standard measures. On four other dimensions its business cycle implications represent a substantial improvement. It accounts for (i) persistence in output, (ii) the observation that employment across different sectors moves together over the business cycle, (iii) the evidence of ‘excess sensitivity’ of consumption growth to output growth, and (iv) the ‘inverted leading indicator property of interest rates,’ that high interest rates are negatively correlated with future output.
This article investigates the business cycle implications of the planning phase of business investment projects. Time to plan is built into a Kydland-Prescott time-to-build model, which assumes that investment projects take four periods to complete. In the Kydland-Prescott time-to-build model, resources for these projects flow uniformly across the four periods; in the time-to-plan model, few resources are used in the first period. The investigation determines that incorporating time to plan in this way improves the model’s ability to account for three key features of U.S. business cycles: their persistence, or the fact that when output growth is above (or below) average, it tends to remain high (or low) for a few quarters; the fact that productivity leads hours worked over the business cycle; and the fact that business investment in structures and business investment in equipment lag output over the cycle.
We provide new evidence that models of the monetary transmission mechanism should be consistent with at least the following facts. After a contractionary monetary policy shock, the aggregate price level responds very little, aggregate output falls, interest rates initially rise, real wages decline by a modest amount, and profits fall. We compare the ability of sticky price and limited participation models with frictionless labor markets to account for these facts. The key failing of the sticky price model lies in its counterfactual implications for profits. The limited participation model can account for all the above facts, but only if one is willing to assume a high labor supply elasticity (2) and a high markup (40 percent). The shortcomings of both models reflect the absence of labor market frictions, such as wage contracts or factor hoarding, which dampen movements in the marginal cost of production after a monetary policy shock.
We study a one-sector growth model which is standard except for the presence of an externality in the production function. The set of competitive equilibria is large. It includes constant equilibria, sunspot equilibria, cyclical and chaotic equilibria, and equilibria with deterministic or stochastic regime switching. The efficient allocation is characterized by constant employment and a constant growth rate. We identify an income tax-subsidy schedule that supports the efficient allocation as the unique equilibrium outcome. That schedule has two properties: (i) it specifies the tax rate to be an increasing function of aggregate employment, and (ii) earnings are subsidized when aggregate employment is at its efficient level. The first feature eliminates inefficient, fluctuating equilibria, while the second induces agents to internalize the externality.
The marginal cost of plant capacity, measured by the price of equity, is significantly procyclical. Yet, the price of a major intermediate input into expanding plant capacity, investment goods, is countercyclical. The ratio of these prices is Tobin’s q. Following convention, we interpret the fact that Tobin’s q differs from unity at all as reflecting diminishing returns to expanding plant capacity by installing investment goods (“adjustment costs”). However, the phenomenon that interests us is not just that Tobin’s q differs from unity, but also that its numerator and denominator have such different cyclical properties. We interpret the sign switch in their covariation with output as reflecting the interaction of our adjustment cost specification with the operation of two shocks: one which affects the demand for equity and another which shifts the technology for producing investment goods. The adjustment costs cause the two prices to respond differently to these two shocks, and this is why it is possible to choose the shock variances to reproduce the sign switch. These model features are incorporated into a modified version of a model analyzed in Boldrin, Christiano and Fisher (1995). That model incorporates assumptions designed to help account for the observed mean return on risk-free and risky assets. We find that the various modifications not only account for the sign switch, but they also continue to account for the salient features of mean asset returns. We turn to the business cycle implications of our model. The model does as well as standard models with respect to conventional business cycle measures of volatility and comovement with output, and on one dimension the model significantly dominates standard models. The factors that help it account for prices and rates of return on assets also help it account for the fact that employment across a broad range of sectors moves together over the cycle.
We develop a model which accounts for the observed equity premium and average risk-free rate, without implying counterfactually high risk aversion. The model also does well in accounting for business-cycle phenomena. With respect to the conventional measures of business-cycle volatility and comovement with output, the model does roughly as well as the standard business-cycle model. On two other dimensions, the model’s business-cycle implications are actually improved. Its enhanced internal propagation allows it to account for the fact that there is positive persistence in output growth, and the model also provides a resolution to the “excess sensitivity puzzle” for consumption and income. Two key features of the model are habit persistence preferences and a multisector technology with limited intersectoral mobility of factors of production.
We investigate, by Monte Carlo methods, the finite sample properties of GMM procedures for conducting inference about statistics that are of interest in the business cycle literature. These statistics include the second moments of data filtered using the first difference and Hodrick-Prescott filters, and they include statistics for evaluating model fit. Our results indicate that, for the procedures considered, the existing asymptotic theory is not a good guide in a sample the size of quarterly postwar U.S. data.
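The Hodrick-Prescott filter referred to above has a simple closed-form solution that is easy to implement directly. The sketch below (Python with NumPy; an illustration, not code from the paper) extracts trend and cyclical components of a simulated series using the conventional quarterly smoothing parameter λ = 1600.

```python
import numpy as np

def hp_filter(y, lam=1600.0):
    """Hodrick-Prescott decomposition.

    Solves min over tau of sum((y - tau)**2) + lam * sum(squared second
    differences of tau). The first-order conditions give the closed form
    tau = (I + lam * K'K)^{-1} y, where K is the (T-2) x T
    second-difference matrix.
    """
    y = np.asarray(y, dtype=float)
    T = len(y)
    K = np.zeros((T - 2, T))
    for i in range(T - 2):
        K[i, i], K[i, i + 1], K[i, i + 2] = 1.0, -2.0, 1.0
    trend = np.linalg.solve(np.eye(T) + lam * (K.T @ K), y)
    return trend, y - trend  # trend and cyclical components

# Example: filter a simulated random walk with drift
rng = np.random.default_rng(0)
y = np.cumsum(0.5 + rng.standard_normal(200))
trend, cycle = hp_filter(y)
```

Second moments of the cyclical component (its standard deviation, correlations with other filtered series) are then the statistics whose sampling behavior is at issue.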
We describe several methods for approximating the solution to a model in which inequality constraints occasionally bind, and we compare their performance. We apply the methods to a particular model economy which satisfies two criteria: It is similar to the type of model used in actual research applications, and it is sufficiently simple that we can compute what we presume is virtually the exact solution. We have two results. First, all the algorithms are reasonably accurate. Second, on the basis of speed, accuracy and convenience of implementation, one algorithm dominates the rest. We show how to implement this algorithm in a general multidimensional setting, and discuss the likelihood that the results based on our example economy generalize.
We find conditions for the Friedman rule to be optimal in three standard models of money. These conditions are homotheticity and separability assumptions on preferences similar to those in the public finance literature on optimal uniform commodity taxation. We show that there is no connection between our results and the result in the standard public finance literature that intermediate goods should not be taxed.
This paper develops the quantitative implications of optimal fiscal policy in a business cycle model. In a stationary equilibrium the ex ante tax rate on capital income is approximately zero. There is an equivalence class of ex post capital income tax rates and bond policies that support a given allocation. Within this class the optimal ex post capital tax rates can range from being close to i.i.d. to being close to a random walk. The tax rate on labor income fluctuates very little and inherits the persistence properties of the exogenous shocks; thus, there is no presumption that optimal labor tax rates follow a random walk. The welfare gains from smoothing labor tax rates and making ex ante capital income tax rates zero are small, and most of the welfare gains come from an initial period of high taxation on capital income.
This paper presents new empirical evidence to support the hypothesis that positive money supply shocks drive short-term interest rates down. We then present a quantitative, general equilibrium model which is consistent with this hypothesis. The two key features of our model are that (i) money shocks have a heterogeneous impact on agents and (ii) ex post inflexibilities in production give rise to a very low short-run interest elasticity of money demand. Together, these imply that, in our model, a positive money supply shock generates a large drop in the interest rate comparable in magnitude to what we find in the data. In sharp contrast to sticky nominal wage models, our model implies that positive money supply shocks lead to increases in the real wage. We report evidence that this is consistent with the U.S. data. Finally, we show that our model can rationalize a version of the Real Bills Doctrine in which the monetary authority accommodates technology shocks, thereby smoothing interest rates.
Several recent papers provide strong empirical support for the view that an expansionary monetary policy disturbance generates a persistent decrease in interest rates and a persistent increase in output and employment. Existing quantitative general equilibrium models, which allow for capital accumulation, are inconsistent with this view. There does exist a recently developed class of general equilibrium models which can rationalize the contemporaneous response of interest rates, output, and employment to a money supply shock. However, a key shortcoming of these models is that they cannot rationalize persistent liquidity effects. This paper discusses the basic frictions and mechanisms underlying this new class of models and investigates one avenue for generating persistence. We argue that once a simplified version of the model in Christiano and Eichenbaum (1991) is modified to allow for extremely small costs of adjusting sectoral flow of funds, positive money shocks generate long-lasting, quantitatively significant liquidity effects, as well as persistent increases in aggregate economic activity.
There is widespread agreement that a surprise increase in an economy’s money supply drives the nominal interest rate down and economic activity up, at least in the short run. This is understood as reflecting the dominance of the liquidity effect of a money shock over an opposing force, the anticipated inflation effect. This paper illustrates why standard general equilibrium models have trouble replicating the dominant liquidity effect. It also studies several factors which have the potential to improve the performance of these models.
This paper studies the quantitative properties of fiscal and monetary policy in business cycle models. In terms of fiscal policy, optimal labor tax rates are virtually constant and optimal capital income tax rates are close to zero on average. In terms of monetary policy, the Friedman rule is optimal—nominal interest rates are zero—and optimal monetary policy is activist in the sense that it responds to shocks to the economy.
This paper investigates the impact on aggregate variables of changes in government consumption in the context of a stochastic, neoclassical growth model. We show, theoretically, that the impact on output and employment of a persistent change in government consumption exceeds that of a temporary change. We also show that, in principle, there can be an analog to the Keynesian multiplier in the neoclassical growth model. Finally, in an empirically plausible version of the model, we show that the interest rate impact of a persistent government consumption shock exceeds that of a temporary one. Our results provide counterexamples to existing claims in the literature.
In the 1930s, Dunlop and Tarshis observed that the correlation between hours worked and the return to working is close to zero. This observation has become a litmus test by which macroeconomic models are judged. Existing real business cycle models fail this test dramatically. Based on this result, we argue that technology shocks cannot be the sole impulse driving post-war U.S. business cycles. We modify prototypical real business cycle models by allowing government consumption shocks to influence labor market dynamics in a way suggested by Aschauer (1985), Barro (1981, 1987), and Kormendi (1983). This modification can, in principle, bring the models into closer conformity with the data. Our results indicate that when aggregate demand shocks arising from stochastic movements in government consumption are incorporated into the analysis, and an empirically plausible degree of measurement error is allowed for, the model’s empirical performance is substantially improved.
Measured aggregate U.S. consumption does not behave like a martingale. This paper develops and tests two variants of the permanent income model that are consistent with this fact. In both variants, we assume agents make decisions on a continuous time basis. According to the first variant, the martingale hypothesis holds in continuous time and serial persistence in measured consumption reflects only the effects of time aggregation. We investigate this variant using both structural and atheoretical econometric models. The evidence against these models is far from overwhelming. This suggests that the martingale hypothesis may yet be a useful way to conceptualize the relationship between aggregate quarterly U.S. consumption and income. According to the second variant of the permanent income model, serial persistence in measured consumption reflects the effects of exogenous technology shocks and time aggregation. In this model, continuous time consumption does not behave like a martingale. We find little evidence against this variant of the permanent income model. It is difficult, on the basis of aggregate quarterly U.S. data, to convincingly distinguish between the different continuous time models considered in the paper.
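The time-aggregation mechanism invoked in the first variant can be illustrated with a small simulation (a sketch, not the paper's econometric model): time-averaging a fine-grid martingale induces positive serial correlation in measured first differences, approaching 0.25 as the averaging window grows, a result due to Working (1960).

```python
import numpy as np

# Even if "instantaneous" consumption is a martingale, time-averaged
# (measured) consumption has serially correlated first differences.
rng = np.random.default_rng(42)
m = 20                      # fine-grid steps per observation period
n_periods = 20_000
x = np.cumsum(rng.standard_normal(m * n_periods))  # martingale on the fine grid
c = x.reshape(n_periods, m).mean(axis=1)           # time-averaged observations
dc = np.diff(c)
rho1 = np.corrcoef(dc[:-1], dc[1:])[0, 1]
# Exact value for window m: (m**2 - 1) / (2 * (2 * m**2 + 1)), about 0.249
# for m = 20, converging to 0.25 as m grows.
```

So the persistence in measured quarterly consumption is, on its own, not evidence against the continuous-time martingale hypothesis.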
This paper describes and evaluates P-Star (P*), a new method to forecast inflation trends which was introduced by the Federal Reserve Board of Governors in the summer of 1989. The paper examines how well P* would have done, compared with eight other forecasting methods, had all of these methods been used to forecast inflation in the 1970s and 1980s. P* turns out to be not an exceptionally good or bad way to forecast inflation.
This paper evaluates Hayashi’s conjecture that Japan’s postwar saving experience can be accounted for by the neoclassical model of economic growth, as reflecting that country’s efforts to reconstruct a capital stock severely damaged in World War II. I call this the reconstruction hypothesis. I take a simplified version of a standard neoclassical growth model that is in widespread use in macroeconomics and simulate its response to capital destruction. The saving rate path implied by the model differs significantly from the path taken by actual Japanese postwar saving data. I discuss several model modifications which would reconcile the reconstruction hypothesis with Japan’s postwar saving experience. For the reconstruction hypothesis to be credible requires independent evidence on the empirical plausibility of the model modifications. It is left to future research to determine whether that evidence exists.
This paper studies the accuracy of two versions of the procedure proposed by Kydland and Prescott (1980, 1982) for approximating the optimal decision rules in problems in which the objective fails to be quadratic and the constraints linear. The analysis is carried out in the context of a particular example: a version of the Brock-Mirman (1972) model of optimal economic growth. Although the model is not linear quadratic, its solution can nevertheless be computed with arbitrary accuracy using a variant of the value function iteration procedures described in Bertsekas (1976). I find that the Kydland-Prescott approximate decision rules are very similar to those implied by value function iteration.
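Value function iteration of the kind used as the accuracy benchmark is straightforward to reproduce in a textbook special case. The sketch below (Python/NumPy; an illustration, not the paper's computation) solves a deterministic Brock-Mirman model with log utility, Cobb-Douglas production, and full depreciation, for which the exact decision rule k' = αβk^α is known, and compares the iterated solution against it.

```python
import numpy as np

# Deterministic Brock-Mirman model: u(c) = log(c), y = k**alpha, full
# depreciation. The exact policy is k' = alpha * beta * k**alpha.
alpha, beta = 0.36, 0.95
kgrid = np.linspace(0.05, 0.5, 500)  # capital grid covering the steady state

# Consumption for every (k, k') pair; infeasible choices get -inf utility.
output = kgrid ** alpha
c = output[:, None] - kgrid[None, :]
u = np.where(c > 0, np.log(np.maximum(c, 1e-12)), -np.inf)

V = np.zeros(len(kgrid))
for _ in range(2000):  # Bellman iteration: V(k) = max_k' u(c) + beta * V(k')
    V_new = np.max(u + beta * V[None, :], axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        V = V_new
        break
    V = V_new

policy = kgrid[np.argmax(u + beta * V[None, :], axis=1)]
closed_form = alpha * beta * kgrid ** alpha  # exact decision rule
```

On a grid this fine, the iterated policy tracks the closed form to within roughly the grid spacing, which is the sense in which value function iteration delivers an essentially exact benchmark for judging other approximations.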
The motive to hold inventories purely in the hope of profiting from a price increase is called the speculative motive. This motive has received considerable attention in the literature. However, existing studies do not have a clear implication for how large it is quantitatively. This paper incorporates the speculative motive for holding inventories into an otherwise standard real business cycle model and finds that empirically plausible parameterizations of the model result in an average inventory stock to output ratio that is virtually zero. For this reason, we conclude that the quantitative magnitude of the speculative role for holding inventories in this model is quite small. This suggests the possibility that the study of aggregate economic phenomena can safely abstract from inventory speculation.
This paper examines the quantitative importance of temporal aggregation bias in distorting parameter estimates and hypothesis tests. Our strategy is to consider two empirical examples in which temporal aggregation bias has the potential to account for results which are widely viewed as being anomalous from the perspective of particular economic models. Our first example investigates the possibility that temporal aggregation bias can lead to spurious Granger causality relationships. The quantitative importance of this possibility is examined in the context of Granger causal relations between the growth rates of money and various measures of aggregate output. Our second example investigates the possibility that temporal aggregation bias can account for the slow speeds of adjustment typically obtained with stock adjustment models. The quantitative importance of this possibility is examined in the context of a particular class of continuous and discrete time equilibrium models of inventories and sales. The different models are compared on the basis of the behavioral implications of the estimated values of the structural parameters which we obtain and their overall statistical performance. The empirical results from both examples provide support for the view that temporal aggregation bias can be quantitatively important in the sense of significantly distorting inference.
This paper presents a completely worked example applying the frequency domain estimation strategy proposed by Hansen and Sargent [1980, 1981a]. A bivariate, high order continuous time autoregressive moving average model is estimated subject to the restrictions implied by the rational expectations model of the term structure of interest rates. The estimation strategy takes into account the fact that one of the data series consists of point-in-time observations, while the other is time averaged. Alternative strategies are considered for taking into account nonstationarity in the data. Computing times reported in the paper demonstrate that estimation using the techniques of Hansen and Sargent is inexpensive.
This paper investigates two methods of approximating the optimal decision rules of a stochastic, representative agent model which exhibits growth in steady state and cannot be expressed in linear-quadratic form. Both methods are modifications of the linear-quadratic approximation technique proposed by Kydland and Prescott. It is shown that one of the solution methods leads to bizarre dynamic behavior, even with shocks of empirically reasonable magnitude. The other solution technique does not exhibit such bizarre behavior.
A bivariate Granger-causality test on money and output finds statistically significant causality when data are measured in log levels, but not when they are measured in first differences of the logs. Which of these results is right? The answer to that question matters because a finding of no Granger-causality from money to output would substantially embarrass existing business cycle models in which money plays an important role [Eichenbaum and Singleton (1986)]. Monte Carlo simulation experiments indicate that, most probably, the first difference results reflect lack of power, whereas the level results reflect Granger-causality that is actually in the data.
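Mechanically, a bivariate Granger-causality test is an F-test of whether lags of one variable improve the forecast of the other. The sketch below (Python/NumPy; the lag length, sample size, and simulated process are illustrative assumptions, not the paper's specification) implements the test with plain OLS and applies it to data in which x causes y by construction.

```python
import numpy as np

def granger_f(y, x, p=4):
    """F-statistic for the null that p lags of x do not help predict y,
    beyond a constant and p lags of y itself (plain OLS)."""
    T = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - i:T - i] for i in range(1, p + 1)])
    lags_x = np.column_stack([x[p - i:T - i] for i in range(1, p + 1)])
    const = np.ones((T - p, 1))
    X_r = np.hstack([const, lags_y])           # restricted: own lags only
    X_u = np.hstack([const, lags_y, lags_x])   # unrestricted: add lags of x
    rss = lambda X: np.sum((Y - X @ np.linalg.lstsq(X, Y, rcond=None)[0]) ** 2)
    df = (T - p) - X_u.shape[1]
    return ((rss(X_r) - rss(X_u)) / p) / (rss(X_u) / df)  # ~ F(p, df) under the null

# Simulated system in which x Granger-causes y but not the reverse
rng = np.random.default_rng(1)
T = 400
x, y = np.zeros(T), np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.8 * x[t - 1] + rng.standard_normal()

f_x_to_y = granger_f(y, x)   # large: lags of x predict y
f_y_to_x = granger_f(x, y)   # small: lags of y add nothing for x
```

The levels-versus-first-differences question in the abstract amounts to asking whether y and x should enter such regressions as log levels or as growth rates; the Monte Carlo evidence cited favors the levels specification.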
Deaton (1986) has noted that if income is a first-order autoregressive process in first differences, then a simple version of Friedman’s permanent income hypothesis (SPIH) implies that measured U.S. consumption is insufficiently sensitive to innovations in income. This paper argues that this implication of the SPIH is a consequence of the fact that it ignores the role of the substitution effect in the consumption decision. Using a parametric version of the standard model of economic growth, the paper shows that very small movements in interest rates are sufficient to induce an empirically plausible amount of consumption smoothing. Since an overall evaluation of the model’s explanation for the observed smoothness of consumption requires examining its implications for other aspects of the data, the paper also explores some of these.
This paper investigates—in the context of a simple example—the accuracy of an econometric technique recently proposed by Kydland and Prescott. We consider a hypothetical econometrician who has a large sample of data, which is known to be generated as a solution to an infinite horizon, stochastic optimization problem. The form of the optimization problem is known to the econometrician. However, the values of some of the parameters need to be estimated. The optimization problem—presented in a recent paper by Long and Plosser—is not linear quadratic. Nevertheless, its closed form solution is known, although not to the hypothetical econometrician of this paper. The econometrician uses Kydland and Prescott’s method to estimate the unknown structural parameters. Kydland and Prescott’s approach involves replacing the given stochastic optimization problem by another which approximates it. The approximate problem is an element of the class of linear quadratic problems, whose solution is well-known—even to the hypothetical econometrician of this paper. After examining the probability limits of the econometrician’s estimators under “reasonable” specifications of model parameters, we conclude that the Kydland and Prescott method works well in the example considered. It is left to future research to determine the extent to which the results obtained for the example in this paper apply to a broader class of models.
This paper describes and implements a procedure for estimating the timing interval in any linear econometric model. The procedure is applied to Taylor’s model of staggered contracts using annual averaged price and output data. The fit of the version of Taylor’s model with serially uncorrelated disturbances improves as the timing interval of the model is reduced.
This paper shows how to derive the family of models in which Cagan’s model of hyperinflation is a rational expectations model. The slope parameter in Cagan’s portfolio balance equation is identified in some of these models and in others it is not—a fact which clarifies results obtained in several recent papers.
Theory typically does not give us reason to believe that economic models ought to be formulated at the same level of time aggregation at which data happen to be available. Nevertheless, this is frequently done when formulating econometric models, with potentially important specification-error implications. This suggests examining the alternatives, one of which is to model in continuous time. The primary difficulty in inferring the parameters of a continuous time model given sampled observations is the “aliasing identification problem.” This paper shows how the restrictions implied by rational expectations sometimes do, and sometimes do not, resolve the problem. This is accomplished very simply in the context of a hypothesis about the term structure of interest rates. The paper confirms and extends results obtained for another example by Hansen and Sargent.