V. V. Chari is currently the Paul W. Frenzel Land Grant Professor of Liberal Arts at the University of Minnesota. He has been a professor of economics at the University of Minnesota since 1994 and is a former chair of the Department of Economics there. Chari has been an advisor at the Federal Reserve Bank of Minneapolis since 1994, where previously he was also a senior research officer.
Chari received his Ph.D. from Carnegie-Mellon University in 1980. His research focuses on banking, fiscal, and monetary policy and on issues of economic development.
Chari serves in many capacities for various organizations, including on the boards of editors of the Journal of Economic Theory, Macroeconomic Dynamics, the Review of Economic Dynamics, Econometrica, and the Journal of Economic Literature. In 1998, he was elected a fellow of the Econometric Society.
The financialization view is that increased trading in commodity futures markets is associated with increases in the growth rate and volatility of commodity spot prices. This view gained credence because in the 2000s trading volume increased sharply and many commodity prices rose and became more volatile. Using a large panel dataset we constructed, which includes commodities with and without futures markets, we find no empirical link between increased futures market trading and changes in price behavior. Our data also shed light on the economic role of futures markets. The conventional view is that futures markets provide one-way insurance by allowing outsiders, traders with no direct interest in a commodity, to insure insiders, traders with a direct interest. The data are not consistent with the conventional view, and we argue that they point to an alternative mutual insurance view, in which all participants insure each other. We formalize this view in a model and show that it is consistent with key features of the data.
We elaborate on the business cycle accounting method proposed by Chari, Kehoe, and McGrattan (2007), clear up some misconceptions about the method, and then apply it to compare the Great Recession across OECD countries as well as to the recessions of the 1980s in these countries. We have four main findings. First, with the notable exceptions of the United States, Spain, Ireland, and Iceland, the Great Recession was driven primarily by the efficiency wedge. Second, in the Great Recession, the labor wedge plays a dominant role only in the United States, and the investment wedge plays a dominant role in Spain, Ireland, and Iceland. Third, in the recessions of the 1980s, the labor wedge played a dominant role only in France, the United Kingdom, Belgium, and New Zealand. Finally, overall in the Great Recession the efficiency wedge played a more important role and the investment wedge played a less important role than they did in the recessions of the 1980s.
“Financial repression”—policies that allow a government to place its debt with financial institutions at relatively low interest rates—has been used widely for centuries. This essay focuses on one important form of repression: requiring financial intermediaries to hold more government bonds than they would if policies didn’t require it. We argue that this policy should only be used when the government has an urgent need to issue debt and has difficulty issuing new debt because of potential lender doubts about the government’s ability to repay.
This research suggests that policies that allow financial institutions to hold only small amounts of their own country’s government bonds may not be desirable.
We argue that bailouts create tax distortions, subsidy distortions, and debt-size externalities. We show that an orderly resolution provision as in the Dodd-Frank Act addresses the tax and subsidy distortions but not the debt-size externalities. A regulatory system that imposes limits on the debt-equity ratio of firms and imposes a Pigouvian tax on their size eliminates the distortions and completely corrects the externalities.
Analysts of optimal policy often advocate for redistributive policies within developed economies using a behind-the-veil-of-ignorance criterion. Such analyses almost invariably ignore the effects of these policies on the well-being of people in poor countries. We argue that this approach is fundamentally misguided because it violates the criterion itself.
Policymakers concerned about rapid swings in commodity prices seek economic guidance about causal factors and future trends, but standard models—based on Harold Hotelling’s classic 1931 theory—are unable to explain actual data on price variability for a wide range of commodities. In this paper, we review this “Hotelling puzzle” and suggest modifications to current theory that may improve explanations of commodity price changes and provide better policy advice.
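For reference, the Hotelling theory mentioned here can be stated compactly; the following is a standard textbook formulation, not a quotation from the paper. In Hotelling's model, the equilibrium price of an exhaustible resource, net of marginal extraction cost, must grow at the rate of interest:

```latex
% Hotelling rule (standard formulation): the net price of an
% exhaustible resource rises at the rate of interest r, where
% p_t is the resource price and c the marginal extraction cost.
\frac{p_{t+1} - c}{p_t - c} = 1 + r,
\qquad \text{so} \qquad
p_t - c = (p_0 - c)\,(1 + r)^t .
```

Under this rule, price changes inherit the smoothness of interest rates, which is why the high volatility observed in actual commodity price data constitutes a puzzle.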
In this paper, we argue that the anticipation of bailouts creates incentives for banks to herd in the sense of making similar investments. This herding behavior makes bailouts more likely and potential crises more severe. Analyses of bailouts and moral hazard problems that focus exclusively on bank size are therefore misguided in our view, and the policy conclusion that limits on bank size can effectively solve moral hazard problems is unwarranted.
We develop a model in which, in order to provide managerial incentives, it is optimal to have costly bankruptcy. If benevolent governments can commit to their policies, it is optimal not to interfere with private contracts. Such policies are time inconsistent in the sense that, without commitment, governments have incentives to bail out firms by buying up the debt of distressed firms and renegotiating their contracts with managers. From an ex ante perspective, however, such bailouts are costly because they worsen incentives and thereby reduce welfare. We show that regulation in the form of limits on the debt-to-value ratio of firms mitigates the time-inconsistency problem by eliminating the incentives of governments to undertake bailouts. In terms of the cyclical properties of regulation, we show that regulation should be tightest in aggregate states in which resources lost to bankruptcy in the equilibrium without a government are largest.
Banks are prone to panic-induced runs due to their traditional structure of short-term, unconditional liabilities and long-term, illiquid assets. To avoid systemic crises caused by such panics, governments tend to bail out failing banks. Traditional banking systems thus impose external costs. Three major theoretical benefits are often used to justify a banking system that relies on short-term debt despite these costs: (1) maturity transformation, (2) efficient monitoring of bank managers, and (3) facilitation of financial transactions. In a previous paper, we argued that the first two justifications, while seemingly compelling, actually suggest financial arrangements very different from our current system. In this paper, we examine the third justification: that a banking system reliant on short-term debt is essential for the facilitation of transactions. We find, in fact, that this reliance is more costly than generally recognized and, moreover, that socially beneficial financial transactions can and should be provided at less cost and risk by both restricting and broadening the payments system. Transactions should be restricted to institutions that continuously mark to market the value of their assets and issue equity claims to owners. Such accounts should also be broadened to include financial vehicles that are readily available, thanks to advances in information and communication technologies, and possibly quite different from current banks.
Banks are vulnerable to self-fulfilling panics because their liabilities (such as demand deposits and certificates of deposit) are short term and unconditional, and their assets (such as mortgages and business loans) are long term and illiquid. To prevent wider financial fallout from such panics, governments have strong incentive to bail out bank debtholders. Paradoxically, expectations of such bailouts can lead financial systems to rely excessively—from a societal perspective—on short-term debt to fund long-term assets. Fragile banking systems thus impose external costs, and regulation may therefore be socially desirable.
In light of this fragility and cost, we examine two of the major theoretical benefits from the reliance of the banking system on short-term debt: (1) maturity transformation and (2) efficient monitoring of bank managers. We argue that while both justifications may be compelling, they point us to financial regulations very different from the ones currently in place. These theoretical justifications suggest that the assets funded by banks should not have close substitutes in publicly traded markets, as is currently the case.
During the recent financial crisis, the volume of new loan issuances dropped sharply in the secondary loan market. U.S. policymakers responded with a variety of proposals aimed at restoring normal market function, including purchasing assets at above-market prices and reducing the costs of holding loans to maturity.
We develop a model of the secondary loan market to analyze the effectiveness of these proposals. In this model, the market’s primary function is to allocate loans to originators or secondary owners that have a comparative advantage in managing them. Because loan originators are better informed than potential purchasers about their loan quality, the markets suffer from adverse selection.
The model finds that the interaction of adverse selection and reputational incentives creates fragile economic outcomes. In particular, it generates sudden collapses in new issuance volume in response to small changes in collateral value, similar to the fluctuations and credit inefficiencies seen empirically during the financial crisis.
We use the model to analyze programs that were proposed and in some cases implemented by policymakers to address loan market dysfunction and find that they do little to resolve the market’s inherent adverse selection problem. We conclude that, unfortunately, these policies were (or would have been) most likely ineffective, and possibly even counterproductive.
Now that global financial markets are beginning to stabilize, the Federal Reserve is considering how best to reabsorb liquidity so as not to create inflation as the economy revives. Three broad strategies for managing monetary reserves in the United States include: (1) paying interest on excess reserves, (2) managing interest rates on short-term deposits, and (3) selling back financial assets such as mortgage-backed securities. From a theoretical standpoint, these strategies are identical; which approach is employed is not of fundamental macroeconomic importance. Nevertheless, this note argues that several potentially large dangers associated with the first two strategies have been overlooked, whereas a frequently cited weakness of asset sales has been exaggerated. The best course is a careful blend of all three approaches, with strong emphasis on a preannounced program of gradual sales of financial assets. Such a joint strategy is likely to have the highest probability of success in draining reserves, with minimal risk.
The Ramsey approach to policy analysis finds the best competitive equilibrium given a set of available instruments. This approach is silent about unique implementation, namely designing policies so that the associated competitive equilibrium is unique. This silence is particularly problematic in monetary policy environments where many ways of specifying policy lead to indeterminacy. We show that sophisticated policies which depend on the history of private actions and which can differ on and off the equilibrium path can uniquely implement any desired competitive equilibrium. A large literature has argued that monetary policy should adhere to the Taylor principle to eliminate indeterminacy. Our findings say that adherence to the Taylor principle on these grounds is unnecessary. Finally, we show that sophisticated policies are robust to imperfect information.
The United States is indisputably undergoing a financial crisis and is perhaps headed for a deep recession. Here we examine three claims about the way the financial crisis is affecting the economy as a whole and argue that all three claims are myths. We also present three underappreciated facts about how the financial system intermediates funds between households and corporate businesses. Conventional analyses of the financial crisis focus on interest rate spreads. We argue that such analyses may lead to mistaken inferences about the real costs of borrowing and argue that, during financial crises, variations in the levels of nominal interest rates might lead to better inferences about variations in the real costs of borrowing. Moreover, we argue that even if the current increases in spreads indicate increases in the riskiness of the underlying projects, this increase by itself does not necessarily indicate the need for massive government intervention. We call for policymakers to articulate the precise nature of the market failure they see, to present hard evidence that differentiates their view of the data from other views that would not require such intervention, and to share with the public the logic and evidence supporting the case that the particular intervention they are advocating will fix this market failure.
Innovative activities have public good characteristics in the sense that the cost of producing the innovation is high compared to the cost of producing subsequent units. Moreover, knowledge of how to produce subsequent units is widely known once the innovation has occurred and is, therefore, non-rivalrous. The main question of this paper is whether mechanisms can be found which exploit market information to provide appropriate incentives for innovation. The ability of the mechanism designer to exploit such information depends crucially on the ability of the innovator to manipulate market signals. We show that if the innovator cannot manipulate market signals, then the efficient levels of innovation can be implemented without deadweight losses, for example, by using appropriately designed prizes. If the innovator can use bribes, buybacks, or other ways of manipulating market signals, patents are necessary.
Macroeconomists have largely converged on method, model design, reduced-form shocks, and principles of policy advice. Our main disagreements today are about implementing the methodology. Some think New Keynesian models are ready to be used for quarter-to-quarter quantitative policy advice; we do not. Focusing on the state-of-the-art version of these models, we argue that some of its shocks and other features are not structural or consistent with microeconomic evidence. Since an accurate structural model is essential to reliably evaluate the effects of policies, we conclude that New Keynesian models are not yet useful for policy analysis.
We analyze the setting of monetary and nonmonetary policies in monetary unions. We show that in these unions a time inconsistency problem in monetary policy leads to a novel type of free-rider problem in the setting of nonmonetary policies, such as labor market policy, fiscal policy, and bank regulation. The free-rider problem leads the union’s members to pursue lax nonmonetary policies that induce the monetary authority to generate high inflation. The free-rider problem can be mitigated by imposing constraints on the nonmonetary policies, like unionwide rules on labor market policy, debt constraints on members’ fiscal policy, and unionwide regulation of banks. When there is no time inconsistency problem, there is no free-rider problem, and constraints on nonmonetary policies are unnecessary and possibly harmful.
Robert Solow has criticized our 2006 Journal of Economic Perspectives essay describing “Modern Macroeconomics in Practice.” Solow eloquently voices the commonly heard complaint that too much macroeconomic work today starts with a model with a single type of agent. We argue that modern macroeconomics may not end too far from where Solow prefers. He is also critical of how modern macroeconomists use data to construct models. Specifically, he seems to think that calibration is the only way that our models encounter data. To the contrary, we argue that modern macroeconomics uses a wide variety of empirical methods and that this big-tent approach has served macroeconomics well. Solow also questions our claim that modern macroeconomics is firmly grounded in economic theory. We disagree and explain why.
The optimal choice of a monetary policy instrument depends on how tight and transparent the available instruments are and on whether policymakers can commit to future policies. Tightness is always desirable; transparency is desirable only if policymakers cannot commit. Interest rates, which can be made endogenously tight, have a natural advantage over money growth and exchange rates, which cannot. As prices, interest and exchange rates are more transparent than money growth. All else equal, the best instrument is interest rates and the next-best, exchange rates. These findings are consistent with the observed instrument choices of developed and less-developed economies.
The central finding of the recent structural vector autoregression (SVAR) literature with a differenced specification of hours is that technology shocks lead to a fall in hours. Researchers have used this finding to argue that real business cycle models are unpromising. We subject this SVAR specification to a natural economic test and show that when applied to data from a multiple-shock business cycle model, the procedure incorrectly concludes that the model could not have generated the data as long as demand shocks play a nontrivial role. We also test another popular specification, which uses the level of hours, and show that with nontrivial demand shocks, it cannot distinguish between real business cycle models and sticky price models. The crux of the problem for both SVAR specifications is that available data require a VAR with a small number of lags and such a VAR is a poor approximation to the model’s VAR.
We make three comparisons relevant for the business cycle accounting approach. We show that in theory, representing the investment wedge as a tax on investment is equivalent to representing this wedge as a tax on capital income as long as the probability distributions over this wedge in the two representations are the same. In practice, convenience dictates that the underlying probability distributions over the investment wedge are different in the two representations. Even so, the quantitative results under the two representations are essentially identical. We also compare our methodology, the CKM methodology, to an alternative one used in Christiano and Davis (2006) and by us in early incarnations of the business cycle accounting approach. We argue that the CKM methodology rests on more secure theoretical foundations. Finally, we show that the results from the VAR-style decomposition of Christiano and Davis reinforce the results of the business cycle decomposition of CKM.
We propose a simple method to help researchers develop quantitative models of economic fluctuations. The method rests on the insight that many models are equivalent to a prototype growth model with time-varying wedges which resemble productivity, labor and investment taxes, and government consumption. Wedges corresponding to these variables—efficiency, labor, investment, and government consumption wedges—are measured and then fed back into the model in order to assess the fraction of various fluctuations they account for. Applying this method to U.S. data for the Great Depression and the 1982 recession reveals that the efficiency and labor wedges together account for essentially all of the fluctuations; the investment wedge plays a decidedly tertiary role, and the government consumption wedge, none. Analyses of the entire postwar period and alternative model specifications support these results. Models with frictions manifested primarily as investment wedges are thus not promising for the study of business cycles. (See Additional Material for a response to Christiano and Davis (2006).)
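The prototype growth model with wedges can be sketched as follows; the notation is a standard rendering of the Chari, Kehoe, and McGrattan (2007) setup, not a quotation, with U the period utility function and F the production function:

```latex
% Prototype growth model with four wedges (sketch):
% efficiency wedge A_t, labor wedge 1 - \tau_{l,t},
% investment wedge 1 + \tau_{x,t}, government consumption wedge g_t.
y_t = A_t F(k_t, l_t)
% efficiency wedge in production
\\
-\frac{U_{l,t}}{U_{c,t}} = (1 - \tau_{l,t})\, A_t F_{l,t}
% labor wedge on the consumption-leisure margin
\\
(1 + \tau_{x,t})\, U_{c,t} = \beta\, \mathbb{E}_t \big[ U_{c,t+1}
  \big( A_{t+1} F_{k,t+1} + (1 - \delta)(1 + \tau_{x,t+1}) \big) \big]
% investment wedge in the Euler equation
\\
c_t + x_t + g_t = y_t
% government consumption wedge in the resource constraint
```

Measuring the wedges amounts to choosing A_t, the tax-like terms, and g_t so that these conditions hold exactly in the data, then feeding them back into the model one at a time or in combinations.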
Theoretical advances in macroeconomics made in the last three decades have had a major influence on macroeconomic policy analysis. Moreover, over the last several decades, the United States and other countries have undertaken a variety of policy changes that are precisely what macroeconomic theory of the last 30 years suggests. The three key developments that have shaped macroeconomic policy analysis are the critique of policy evaluation due to Robert Lucas, the time inconsistency critique of discretionary policy due to Finn Kydland and Edward Prescott, and the development of quantitative dynamic stochastic general equilibrium models following Kydland and Prescott.
The main substantive finding of the recent structural vector autoregression literature with a differenced specification of hours (DSVAR) is that technology shocks lead to a fall in hours. Researchers have used these results to argue that business cycle models in which technology shocks lead to a rise in hours should be discarded. We evaluate the DSVAR approach by asking, is the specification derived from this approach misspecified when the data are generated by the very model the literature is trying to discard? We find that it is misspecified. Moreover, this misspecification is so great that it leads to mistaken inferences that are quantitatively large. We show that the other popular specification that uses the level of hours (LSVAR) is also misspecified. We argue that alternative state space approaches, including the business cycle accounting approach, are more fruitful techniques for guiding the development of business cycle theory.
In recent financial crises and in recent theoretical studies of them, abrupt declines in capital inflows, or sudden stops, have been linked with large drops in output. Do sudden stops cause output drops? No, according to a standard equilibrium model in which sudden stops are generated by an abrupt tightening of a country’s collateral constraint on foreign borrowing. In this model, in fact, sudden stops lead to output increases, not decreases. An examination of the quantitative effects of a well-known sudden stop, in Mexico in the mid-1990s, confirms that a drop in output accompanying a sudden stop cannot be accounted for by the sudden stop alone. To generate an output drop during a financial crisis, as other studies have done, the model must include other economic frictions which have negative effects on output large enough to overwhelm the positive effect of the sudden stop.
The desirability of fiscal constraints in monetary unions depends critically on whether the monetary authority can commit to follow its policies. If it can commit, then debt constraints can only impose costs. If it cannot commit, then fiscal policy has a free-rider problem, and debt constraints may be desirable. This type of free-rider problem is new and arises only because of a time inconsistency problem.
Why is inflation persistently high in some periods and low in others? The reason may be absence of commitment in monetary policy. In a standard model, absence of commitment leads to multiple equilibria, or expectation traps, even without trigger strategies. In these traps, expectations of high or low inflation lead the public to take defensive actions, which then make accommodating those expectations the optimal monetary policy. Under commitment, the equilibrium is unique and the inflation rate is low on average. This analysis suggests that institutions which promote commitment can prevent high inflation episodes from recurring.
This study analyzes two monetary economies, a cash-credit good model and a limited-participation model. In these models, monetary policy is made by a benevolent policymaker who cannot commit to future policies. The study defines and analyzes Markov equilibrium in these economies and shows that there is no time-inconsistency problem for a wide range of parameter values.
Financial crises are widely argued to be due to herd behavior. Yet recently developed models of herd behavior have been subjected to two critiques which seem to make them inapplicable to financial crises. Herds disappear from these models if two of their unappealing assumptions are modified: if their zero-one investment decisions are made continuous and if their investors are allowed to trade assets with market-determined prices. However, both critiques are overturned—herds reappear in these models—once another of their unappealing assumptions is modified: if, instead of moving in a prespecified order, investors can move whenever they choose.
Economists have offered many theories for the U.S. Great Depression, but no consensus has formed on the main forces behind it. Here we describe and demonstrate a simple methodology for determining which theories are the most promising. We show that a large class of models, including models with various frictions, are equivalent to a prototype growth model with time-varying efficiency, labor, and investment wedges that, at least on face value, look like time-varying productivity, labor taxes, and investment taxes. We use U.S. data to measure these wedges, feed them back into the prototype growth model, and assess the fraction of the fluctuations in 1929–39 that they account for. We find that the efficiency and labor wedges account for essentially all of the decline and subsequent recovery. Investment wedges play, at best, a minor role.
Recent empirical work on financial crises documents that crises tend to occur when macroeconomic fundamentals are weak, but that even after conditioning on an exhaustive list of fundamentals, a sizable random component to crises and associated capital flows remains. We develop a model of herd behavior consistent with these observations. Informational frictions together with standard debt default problems lead to volatile capital flows resembling hot money and financial crises. We show that repaying debt during difficult times identifies a government as financially resilient, enhances its reputation and stabilizes capital flows. Bailing out governments deprives resilient countries of this opportunity.
The central puzzle in international business cycles is that fluctuations in real exchange rates are volatile and persistent. We quantify the popular story for real exchange rate fluctuations: they are generated by monetary shocks interacting with sticky goods prices. If prices are held fixed for at least one year, risk aversion is high, and preferences are separable in leisure, then real exchange rates generated by the model are as volatile as in the data and quite persistent, but less so than in the data. The main discrepancy between the model and the data, the consumption-real exchange rate anomaly, is that the model generates a high correlation between real exchange rates and the ratio of consumption across countries, while the data show no clear pattern between these variables.
Herd behavior is argued by many to be present in many markets. Existing models of such behavior have been subjected to two apparently devastating critiques. The continuous investment critique is that in the basic model herds disappear if simple zero-one investment decisions are replaced by the more appealing assumption that investment decisions are continuous. The price critique is that herds disappear if, as seems natural, other investors can observe asset market prices. We argue that neither critique is devastating. We show that once we replace the unappealing exogenous timing assumption of the early models, that investors move in a prespecified order, with a more appealing endogenous timing assumption, that investors can move whenever they choose, herds reappear.
Under a narrow set of assumptions, Chamley (1986) established that the optimal tax rate on capital income is eventually zero. This study examines and extends that result by relaxing Chamley’s assumptions, one by one, to see if the result still holds. It does. This study unifies the work of other researchers, who have confirmed the result independently using different types of models and approaches. This study uses just one type of model (discrete time) and just one approach (primal). Chamley’s result holds when agents are heterogeneous rather than identical, the economy’s growth rate is endogenous rather than exogenous, the economy is open rather than closed, and agents live in overlapping generations rather than forever. (With this last assumption, the result holds under stricter conditions than with the others.)
In 1995, Robert E. Lucas, Jr., was awarded the Nobel Prize in Economic Sciences. This review places Lucas’ work in a historical context and evaluates the effect of this work on the economics profession. Lucas’ central contribution is that he developed and applied economic theory to answer substantive questions in macroeconomics. Economists today routinely analyze systems in which agents operate in complex probabilistic environments to understand interactions about which the great theorists of an earlier generation could only speculate. This sea change is due primarily to Lucas. This essay is reprinted from the Journal of Economic Perspectives (Winter 1998, vol. 12, no. 1, pp. 171–86) with the permission of the American Economic Association.
The conventional wisdom is that monetary shocks interact with sticky goods prices to generate the observed volatility and persistence in real exchange rates. We investigate this conventional wisdom in a quantitative model with sticky prices. We find that with preferences as in the real business cycle literature, irrespective of the length of price stickiness, the model necessarily produces only a fraction of the volatility in exchange rates seen in the data. With preferences which are separable in leisure, the model can produce the observed volatility in exchange rates. We also show that long stickiness is necessary to generate the observed persistence. In addition, we show that making asset markets incomplete does not measurably increase either the volatility or persistence of real exchange rates.
We show that the desirability of fiscal constraints in monetary unions depends critically on the extent of commitment of the monetary authority. If the monetary authority can commit to its policies, fiscal constraints can only impose costs. If the monetary authority cannot commit, there is a free-rider problem in fiscal policy, and fiscal constraints may be desirable.
We provide an introduction to optimal fiscal and monetary policy using the primal approach to optimal taxation. We use this approach to address how fiscal and monetary policy should be set over the long run and over the business cycle. We find four substantive lessons for policymaking: Capital income taxes should be high initially and then roughly zero; tax rates on labor and consumption should be roughly constant; state-contingent taxes on assets should be used to provide insurance against adverse shocks; and monetary policy should be conducted so as to keep nominal interest rates close to zero. We begin with optimal taxation in a static context. We then develop a general framework to analyze optimal fiscal policy. Finally, we analyze optimal monetary policy in three commonly used models of money: a cash-credit economy, a money-in-the-utility-function economy, and a shopping-time economy.
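The implementability constraint at the heart of the primal approach can be sketched as follows; this is a representative statement under standard assumptions (a representative household with utility over consumption c_t and labor l_t), and the exact initial-wealth term varies across environments:

```latex
% Implementability constraint (sketch): the household budget set
% and first-order conditions collapse into a single restriction on
% allocations, so the Ramsey problem maximizes utility subject to
% this constraint and the resource constraint.
\sum_{t=0}^{\infty} \beta^{t} \left( U_{c,t}\, c_t + U_{l,t}\, l_t \right)
  = U_{c,0} \left[ \big( 1 + r_0 (1 - \tau_{k,0}) \big) k_0 + b_0 \right]
```

Here U_{l,t} < 0, so the left side is the present value of consumption net of after-tax labor income, expressed in marginal-utility units; all prices and tax rates have been substituted out using the household's first-order conditions.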
We construct a quantitative equilibrium model with price setting and use it to ask whether, with staggered price setting, monetary shocks can generate business cycle fluctuations. These fluctuations include persistent output fluctuations along with the other defining features of business cycles, like volatile investment and smooth consumption. We assume that prices are exogenously sticky for a short period of time. Persistent output fluctuations require endogenous price stickiness in the sense that firms choose not to change prices very much when they can do so. We find that for a wide range of parameter values the amount of endogenous stickiness is small. As a result, we find that in a standard quantitative business cycle model staggered price setting, by itself, does not generate business cycle fluctuations.
In U.S. elections, voters often vote for candidates from different parties for president and Congress. Voters also express dissatisfaction with the performance of Congress as a whole and satisfaction with their own representative. We develop a model of split-ticket voting in which government spending is financed by uniform taxes but the benefits from this spending are concentrated. While the model generates split-ticket voting, overall spending is too high only if the president’s powers are limited. Overall spending is too high in a parliamentary system, and our model can be used as the basis of an argument for term limits.
We ask what fraction of the variation in incomes across countries can be accounted for by investment distortions. In our neoclassical growth model, the relative price of investment to consumption is a good measure of such distortions. Using data on relative prices, we estimate a stochastic process for distortions and compare the resulting variance of incomes in the model to that in the data. We find that the variation of incomes in the model is roughly 4/5 of that in the data. Our model does well in accounting for six key regularities on income and investment in the data.
The paper is followed by three appendices: Appendix 1, which describes the log-likelihood function; Appendix 2, which describes the construction of the labor share of income associated with the production of consumption and investment goods; and the Data Appendix.
This article investigates the relationship between inflation and output, in the data and in standard models. The article reports that empirical cross-country studies generally find a nonlinear, negative relationship between inflation and output, a relationship that standard models cannot come close to reproducing. The article demonstrates that the models’ problem may be due to their standard narrow assumption that all money is held by the public for making transactions. When the models are adjusted to also assume that banks are required to hold money, the models do a much better job. The article concludes that researchers interested in studying the effects of monetary policy on growth should shift their attention away from printing money and toward the study of banking and financial regulations.
In this paper we present a formal model of vote trading within a legislature. The model captures the conventional wisdom that if projects with concentrated benefits are financed by universal taxation, then majority rule leads to excessive spending. This occurs because the proponent of a particular bill needs to acquire the votes of only half the legislature and hence internalizes the costs to only half the representatives. We show that Pareto-superior allocations are difficult to sustain because of a free-rider problem among the representatives. We show that alternative voting rules, such as unanimity, eliminate excessive spending on concentrated-benefit projects but lead to underfunding of global public goods.
We develop a model of a representative democracy in which a legislature makes collective decisions about local public goods expenditures and how they are financed. In our model of the political process legislators defer to spending requests of individual representatives, particularly committee chairmen, who tend to promote spending requests that benefit their own districts. Because legislators do not fully internalize the tax consequences of their individual spending proposals, there is a free rider problem, and as a result spending is excessively high. This leads legislators to prefer a higher level of debt to restrain excessive future spending.
This paper develops the quantitative implications of optimal fiscal policy in a business cycle model. In a stationary equilibrium, the ex ante tax rate on capital income is approximately zero. There is an equivalence class of ex post capital income tax rates and bond policies that support a given allocation. Within this class, the optimal ex post capital tax rates can range from being close to i.i.d. to being close to a random walk. The tax rate on labor income fluctuates very little and inherits the persistence properties of the exogenous shocks; thus, there is no presumption that optimal labor tax rates follow a random walk. The welfare gains from smoothing labor tax rates and making ex ante capital income tax rates zero are small; most of the welfare gains come from an initial period of high taxation on capital income.
We find conditions for the Friedman rule to be optimal in three standard models of money. These conditions are homotheticity and separability assumptions on preferences similar to those in the public finance literature on optimal uniform commodity taxation. We show that there is no connection between our results and the result in the standard public finance literature that intermediate goods should not be taxed.
The U.S. Treasury could raise more revenue if it changed the way it auctions its debt. Under the current procedure, all bidders whose competitive bids for Treasury securities are accepted pay the prices they bid; different winning bidders, that is, pay different prices. Instead, economic theory says, all winning bidders should pay the same price: that of the highest bid not accepted, or the price that just clears the market. This procedural change would increase the revenue that Treasury auctions raise primarily because it would decrease the amount of resources that bidders would spend collecting information about what other bidders are likely to do. It would also reduce the incentives for traders to attempt to manipulate the securities market.
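The two pricing rules can be illustrated with a toy sketch. The bids, quantities, and function names below are hypothetical assumptions for exposition, not the Treasury's actual auction mechanics.

```python
# Toy comparison of pay-as-bid (current) vs. uniform-price (proposed) rules
# for a fixed supply of identical securities. All numbers are hypothetical.

def allocate(bids, supply):
    """Fill bids from the highest price down until supply runs out.

    bids: list of (price, quantity) tuples.
    Returns (winners, clearing_price), where winners is a list of
    (price, quantity_filled) and clearing_price is the price of the
    highest bid (or bid portion) that was rejected.
    """
    winners, clearing_price = [], None
    remaining = supply
    for price, qty in sorted(bids, key=lambda b: b[0], reverse=True):
        filled = min(qty, remaining)
        if filled > 0:
            winners.append((price, filled))
            remaining -= filled
        if filled < qty and clearing_price is None:
            clearing_price = price  # first (partially) rejected bid
    return winners, clearing_price

def discriminatory_revenue(bids, supply):
    """Current rule: every winner pays its own bid price."""
    winners, _ = allocate(bids, supply)
    return sum(price * qty for price, qty in winners)

def uniform_price_revenue(bids, supply):
    """Proposed rule: every winner pays the market-clearing price."""
    winners, clearing_price = allocate(bids, supply)
    if clearing_price is None:  # supply not exhausted; no rejected bid
        raise ValueError("no market-clearing price: all bids were filled")
    return clearing_price * sum(qty for _, qty in winners)

bids = [(99.0, 30), (98.5, 40), (98.0, 50)]  # hypothetical (price, quantity) bids
print(discriminatory_revenue(bids, supply=100))  # 9850.0
print(uniform_price_revenue(bids, supply=100))   # 9800.0
```

Note that for a fixed set of bids, the uniform-price rule mechanically raises weakly less revenue (9,800 versus 9,850 here). The revenue gain described above comes from the equilibrium response of bidders: under uniform pricing they shade their bids less and spend fewer resources forecasting rivals' behavior.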
This paper studies the quantitative properties of fiscal and monetary policy in business cycle models. In terms of fiscal policy, optimal labor tax rates are virtually constant and optimal capital income tax rates are close to zero on average. In terms of monetary policy, the Friedman rule is optimal—nominal interest rates are zero—and optimal monetary policy is activist in the sense that it responds to shocks to the economy.
We examine the validity of one version of the Coase Theorem: In any economy in which property rights are fully allocated, competition will lead to efficient allocations. This version of the theorem implies that the public goods problem can be solved by allocating property rights fully and letting markets do their work. We show that this mechanism is not likely to work well in economies with either pure public goods or global externalities. The reason is that the privatized economy turns out to be highly susceptible to strategic behavior in that the free-rider problem in public goods economies manifests itself as a complementary monopoly problem in the private goods economy. If the public goods or externalities are local in nature, however, market mechanisms are likely to work well.
Our work is related to the recent literature on the foundations of Walrasian equilibrium in that it highlights a relationship among the appropriateness of Walrasian equilibrium as a solution concept, the incentives for strategic play, the aggregate level of complementarities in the economy, and the problem of coordinating economic activity.
This paper uses a simple, graphical approach to analyze what happens to commodity prices and economic welfare when futures markets are introduced into an economy. It concludes that these markets do not necessarily make prices more or less stable. It also concludes that, contrary to common belief, whatever happens to commodity prices is not necessarily related to what happens to the economic welfare of market participants: even when futures markets reduce the volatility of prices, some people can be made worse off. These conclusions come from a series of models that differ in their assumptions about the primary function of futures markets, the structure of the industries involved, and the tastes and technologies of the market participants.
This paper presents a simple general equilibrium model of optimal taxation similar to that of Lucas and Stokey (1983), except that we let the government default on its debt. As a benchmark, we consider Ramsey equilibria in which the government can precommit its policies at the beginning of time. We then consider sustainable equilibria in which both government and private agent decision rules are required to be sequentially rational. We concentrate on trigger mechanisms that specify reversion to the finite-horizon equilibrium after deviations by the government. The main result is that no Ramsey equilibrium with positive debt can be supported by such trigger mechanisms.
This paper presents a simple general equilibrium model of optimal taxation in which both private agents and the government can default on their debt. As a benchmark we consider Ramsey equilibria in which the government can precommit to its policies at the beginning of time, but in which private agents can default. We then consider sustainable equilibria in which both government and private agent decision rules are required to be sequentially rational. We completely characterize the set of sustainable equilibria. In particular, we show that when there is sufficiently little discounting and government consumption fluctuates enough, the Ramsey allocations and policies (in which the government never defaults) can be supported by a sustainable equilibrium.
This paper is a study of bank panics under the U.S. National Banking System in 1864–1913. During this period, bank deposits in the United States, like those in Great Britain and Canada, were not insured by the government. Unlike the United States, however, Great Britain and Canada had no bank panics. The U.S. panics were caused essentially by two unique features of the U.S. banking system: prohibitions on bank branching and pyramiding of bank reserves. The paper constructs a model that includes these features and shows that bank panics can occur even though all agents are rational. In this model, bank panics can be eliminated by a combination of reserve requirements, central bank loans, and occasional restrictions on cash payments by banks. The conclusion is that deposit insurance is not necessary to eliminate bank panics.
We propose a definition of time consistent policy for infinite horizon economies with competitive private agents. Allocations and policies are defined as functions of the history of past policies. A sustainable equilibrium is a sequence of history-contingent policies and allocations that satisfy certain sequential rationality conditions for the government and for private agents. We provide a complete characterization of the sustainable equilibrium outcomes for a variant of Fischer’s (1980) model of capital taxation. We also relate our work to recent developments in the theory of repeated games.
We examine the limiting behavior of cooperative and noncooperative fiscal policies as countries’ market power goes to zero. We show that these policies converge if countries raise revenues through lump-sum taxation. However, if there are unremovable domestic distortions, such as distorting taxes, there can be gains to coordination even when a single country’s policy cannot affect world prices. These results differ from the received wisdom in the optimal tariff literature. The key distinction is that, unlike in the tariff literature, the spending decisions of governments are explicitly modeled.
We propose a definition of involuntary unemployment which differs from that traditionally used in implicit labor contract theory. We say that a worker is involuntarily unemployed if the marginal wage implied by the optimal contract exceeds the marginal rate of substitution between leisure and consumption. We construct a model where risk-neutral firms have monopoly power and show that such monopoly power is necessary for involuntary unemployment to arise in the optimal contract. We numerically compute examples and show that such unemployment occurs for a wide range of parameter values.
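In symbols, and with notation that is ours rather than necessarily the paper's, the definition above can be written as follows, where $u(c,\ell)$ is utility over consumption $c$ and leisure $\ell$, and $w'(h)$ is the marginal wage implied by the contract at hours $h$:

```latex
% A worker is involuntarily unemployed when the marginal wage exceeds
% the marginal rate of substitution between leisure and consumption:
w'(h) \;>\; \mathrm{MRS}_{\ell,c} \;=\; \frac{u_\ell(c,\ell)}{u_c(c,\ell)} .
```

At such an allocation the worker would gladly supply more hours at the contract's marginal wage, which is what makes the unemployment involuntary rather than a voluntary choice of leisure.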
This paper shows that bank runs can be modeled as an equilibrium phenomenon. We demonstrate that some aspects of the intuitive “story” that bank runs start with fears of insolvency of banks can be rigorously modeled. If individuals observe long “lines” at the bank, they correctly infer that there is a possibility that the bank is about to fail and precipitate a bank run. However, bank runs occur even when no one has any adverse information. Extra-market constraints such as suspension of convertibility can prevent bank runs and result in superior allocations.
An examination of the behavior of stock returns around quarterly earnings announcement dates finds a seasonal pattern: small firms show large positive abnormal returns and a sizable increase in the variability of returns around these dates. Only part of the large abnormal returns can be accounted for by the fact that firms with good news tend to announce early. Large firms show no abnormal returns around announcement dates and a much smaller increase in variability.