
About


Policy Hub: Macroblog provides concise commentary and analysis on economic topics including monetary policy, macroeconomic developments, inflation, labor economics, and financial issues for a broad audience.

Authors for Policy Hub: Macroblog are Dave Altig, John Robertson, and other Atlanta Fed economists and researchers.

Comment Standards:
Comments are moderated and will not appear until the moderator has approved them.

Please submit appropriate comments. Inappropriate comments include content that is abusive, harassing, or threatening; obscene, vulgar, or profane; an attack of a personal nature; or overtly political.

In addition, off-topic remarks and spam are not permitted.

January 3, 2018

Is Macroprudential Supervision Ready for the Future?

Virtually everyone agrees that systemic financial crises are bad not only for the financial system but even more importantly for the real economy. Where the disagreements arise is how best to reduce the risk and costliness of future crises. One important area of disagreement is whether macroprudential supervision alone is sufficient to maintain financial stability or whether monetary policy should also play an important role.

In an earlier Notes from the Vault post, I discussed some of the reasons why many monetary policymakers would rather not take on the added responsibility. For example, policymakers would have to determine the appropriate measure of the risk of financial instability and how a change in monetary policy would affect that risk. However, I also noted that many of the same problems also plague the implementation of macroprudential policies.

Since that September 2014 post, additional work has been done on macroprudential supervision. Some of that work was the topic of a recent workshop, "Financial Regulation: Fit for the Future?," hosted by the Atlanta Fed and cosponsored by the Center for the Economic Analysis of Risk at Georgia State University. In particular, the workshop looked at three important issues related to macroprudential supervision: governance of macroprudential tools, measures of when to deploy macroprudential tools, and the effectiveness of macroprudential supervision. This macroblog post discusses some of the contributions of three presentations at the conference.

The question of how to determine when to deploy a macroprudential tool is the subject of a paper by economists Scott Brave (from the Chicago Fed) and José A. Lopez (from the San Francisco Fed). The tool they consider is countercyclical capital buffers, which are supplements to normal capital requirements that are put into place during boom periods to dampen excessive credit growth and provide banks with larger buffers to absorb losses during a downturn.

Brave and Lopez start with existing financial conditions indices and use them to estimate the probability that the economy will transition from growth to falling gross domestic product (GDP), and vice versa. Their model predicted a very high probability of transition to a path of falling GDP in the fourth quarter of 2007, a low probability of transitioning to a falling path in the fourth quarter of 2011, and a low but slightly higher probability in the fourth quarter of 2015.
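The mapping from a financial conditions index to a regime-transition probability can be sketched with a simple logistic link. This is an illustrative toy, not Brave and Lopez's actual model; the function name and all coefficients below are hypothetical.

```python
import math

def transition_probability(fci, intercept=-2.0, slope=1.5):
    """Illustrative logistic mapping from a financial-conditions index
    reading (higher = tighter conditions) to the probability of
    transitioning from growth to falling GDP. Coefficients are made up
    for illustration, not estimated from data."""
    return 1.0 / (1.0 + math.exp(-(intercept + slope * fci)))

# A stressed index reading implies a much higher transition
# probability than a calm one.
calm = transition_probability(-1.0)      # low probability
stressed = transition_probability(2.0)   # high probability
```

The point of such a mapping is that it turns a continuous index into the probability input that a cost-benefit analysis of capital buffers, like the one described below, can act on.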

Brave and Lopez then put these probabilities into a model of the costs and benefits associated with countercyclical capital buffers. Looking back at the fourth quarter of 2007, their results suggest that supervisors should immediately adopt an increase in capital requirements of 25 basis points. In contrast, in the fourth quarters of both 2011 and 2015, their results indicated that no immediate change was needed but that an increase in capital requirements of 25 basis points might need to be adopted within the next six or seven quarters.

The related question—who should determine when to deploy countercyclical capital buffers—was the subject of a paper by Nellie Liang, an economist at the Brookings Institution and former head of the Federal Reserve Board's Division of Financial Stability, and Federal Reserve Board economist Rochelle M. Edge. They find that most countries have a financial stability committee, which has an average of four or more members and is primarily responsible for developing macroprudential policies. Moreover, these committees rarely have the ability to adopt countercyclical macroprudential policies on their own. Indeed, in most cases, all the financial stability committee can do is recommend policies. The committee cannot even compel the competent regulatory authority in its country to either take action or explain why it chose not to act.

Implicit in the two aforementioned papers is the belief that countercyclical macroprudential tools will effectively reduce risks. Federal Reserve Board economist Matteo Crosignani presented a paper he coauthored looking at the recent effectiveness of two such tools in Ireland.

In February 2015, the Irish government watched as housing prices climbed from their postcrisis lows at a potentially unsafe rate. In an attempt to limit the flow of funds into risky mortgage loans, the government imposed limits on the maximum permissible loan-to-value (LTV) ratio and loan-to-income ratio (LTI) for new mortgages. These regulations became effective immediately upon their announcement and prevented the Irish banks from making loans that violated either the LTV or LTI requirements.

Crosignani and his coauthors were able to measure a large decline in loans that did not conform to the new requirements. However, they also find that a sharp increase in mortgage loans that conformed to the requirements largely offset this drop. Additionally, Crosignani and his coauthors find that the banks that were most exposed to the LTV and LTI requirements sought to recoup the lost income by making riskier commercial loans and buying greater quantities of risky securities. Their findings suggest that the regulations may have stopped higher-risk mortgage lending but that other changes in the banks' portfolios at least partially undid the effect on their risk exposure.

August 11, 2016

Forecasting Loan Losses for Stress Tests

Bank capital requirements are back in the news with the recent announcements of the results of U.S. stress tests by the Federal Reserve and the European Union (E.U.) stress tests by the European Banking Authority (EBA). The Federal Reserve found that all 33 of the bank holding companies participating in its test would have continued to meet the applicable capital requirements. The EBA found progress among the 51 banks in its test, but it did not define a pass/fail threshold. In summarizing the results, EBA Chairman Andrea Enria is widely quoted as saying, "Whilst we recognise the extensive capital raising done so far, this is not a clean bill of health," and that there remains work to do.

The results of the stress tests do not mean that banks could survive any possible future macroeconomic shock. That standard would be an extraordinarily high one and would require each bank to hold capital equal to its total assets (or maybe even more if the bank held derivatives). However, the U.S. approach to scenario design is intended to make sure that the "severely adverse" scenario is indeed a very bad recession.

The Federal Reserve's Policy Statement on the Scenario Design Framework for Stress Testing indicates that the severely adverse scenario will have an unemployment increase of between 3 and 5 percentage points or a level of 10 percent overall. That statement observes that during the last half century, the United States has seen four severe recessions with that large of an increase in the unemployment rate, with the rate peaking at more than 10 percent in the last three severe recessions.

To forecast the losses from such a severe recession, the banks need to estimate loss models for each of their portfolios. In these models, the bank estimates the expected loss associated with a portfolio of loans as a function of the variables in the scenario. In estimating these models, banks often have a very large number of loans with which to estimate losses in their various portfolios, especially the consumer and small business portfolios. However, they have very few opportunities to observe how the loans perform in a downturn. Indeed, in almost all cases, banks started keeping detailed loan loss data only in the late 1990s and, in many cases, later than that. Thus, for many types of loans, banks might at best have data only for the relatively mild recession of 2001–02 and the severe recession of 2007–09.
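A minimal sketch of such a portfolio loss model: expected loss as the product of exposure, loss-given-default, and a default probability that responds to a scenario variable (here, the unemployment rate). The function names and every coefficient are hypothetical, chosen only to illustrate the structure banks estimate.

```python
import math

def default_probability(unemployment_rate, intercept=-4.0, beta=0.35):
    """Illustrative logistic link from the scenario's unemployment rate
    (in percent) to a portfolio default probability. Coefficients are
    made up for illustration."""
    z = intercept + beta * unemployment_rate
    return 1.0 / (1.0 + math.exp(-z))

def expected_loss(ead, lgd, unemployment_rate):
    """Expected loss = exposure-at-default x loss-given-default x
    scenario-dependent default probability."""
    return ead * lgd * default_probability(unemployment_rate)

# A $1 billion portfolio with 40 percent loss-given-default:
baseline = expected_loss(1e9, 0.40, 5.0)   # baseline unemployment
severe = expected_loss(1e9, 0.40, 10.0)    # severely adverse scenario
```

The difficulty the text describes is precisely that the coefficients linking the scenario variables to default rates must be estimated from only one or two observed downturns.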

Perhaps the small number of recessions—especially severe recessions—would not be a big problem if recessions differed only in their depth and not their breadth. However, even comparably severe recessions are likely to hit different parts of the economy with varying degrees of severity. As a result, a given loan portfolio may suffer only small losses in one recession but take very large losses in the next recession.

With the potential for models to underestimate losses given there are so few downturns to calibrate to, the stress testing process allows humans to make judgmental changes (or overlays) to model estimates when the model estimates seem implausible. However, the Federal Reserve requires that bank holding companies have a "transparent, repeatable, well-supported process" for the use of such overlays.

My colleague Mark Jensen recently made some suggestions about how stress test modelers could reduce the uncertainty in projected losses that arises from having limited data from directly comparable scenarios. He recommends using estimation procedures based on a probability theorem attributed to Reverend Thomas Bayes. When applied to stress testing, Bayes' theorem describes how to incorporate additional empirical information into an initial understanding of how losses are distributed in order to update and refine loss predictions.

One of the benefits of using techniques based on this theorem is that it allows the incorporation of any relevant data into the forecasted losses. He gives the example of using foreign data to help model the distribution of losses U.S. banks would incur if U.S. interest rates become negative. We have no experience with negative interest rates, but Sweden has recently been accumulating experience that could help in predicting such losses in the United States. Jensen argues that Bayesian techniques allow banks and bank supervisors to better account for the uncertainty around their loss forecasts in extreme scenarios.

Additionally, I have previously argued that the existing capital standards provide a further way of mitigating the weaknesses in the stress tests. The large banks that participate in the stress tests are also in the process of becoming subject to a risk-based capital requirement commonly called Basel III that was approved by an international committee of banking supervisors after the financial crisis. Basel III uses a different methodology to estimate losses in a severe event, one where the historical losses in a loan portfolio provide the parameters of a loss distribution. While Basel III faces the same problem of limited loan loss data—so it almost surely underestimates some risks—those errors are likely to be somewhat different from those produced by the stress tests. Hence, the use of both measures is likely to somewhat reduce the possibility that supervisors end up requiring too little capital for some types of loans.

Both the stress tests and risk-based models of the Basel III type face the unavoidable problem of inaccurately measuring risk because we have limited data from extreme events. The use of improved estimation techniques and multiple ways of measuring risk may help mitigate this problem. But the only way to solve the problem of limited data is to have a greater number of extreme stress events. Given that alternative, I am happy to live with imperfect measures of bank risk.

Author's note: I want to thank the Atlanta Fed's Dave Altig and Mark Jensen for helpful comments.


June 6, 2016

After the Conference, Another Look at Liquidity

When it comes to assessing the impact of central bank asset purchase programs (often called quantitative easing or QE), economists tend to focus their attention on the potential effects on the real economy and inflation. After all, the Federal Reserve's dual mandate for monetary policy is price stability and full employment. But there is another aspect of QE that may also be quite important in assessing its usefulness as a policy tool: the potential effect of asset purchases on financial markets through the collateral channel.

Asset purchase programs involve central bank purchases of large quantities of high-quality, highly liquid assets. Postcrisis, the Fed has purchased more than $3 trillion of U.S. Treasury securities and agency mortgage-backed securities, the European Central Bank (ECB) has purchased roughly 727 billion euros' worth of public-sector bonds (issued by central governments and agencies), and the Bank of Japan is maintaining an annual purchase target of 80 trillion yen. These bonds are not merely assets held by investors to realize a return; they are also securities highly valued for their use as collateral in financial transactions. The Atlanta Fed's 21st annual Financial Markets Conference explored the potential consequences of these asset purchase programs in the context of financial market liquidity.

The collateral channel effect focuses on the role that these low-risk securities play in the plumbing of U.S. financial markets. Financial firms fund a large fraction of their securities holdings in the repurchase (or repo) markets. Repurchase agreements are legally structured as the sale of a security with a promise to repurchase the security at a fixed price at a given point in the future. The economics of this transaction are essentially similar to those of a collateralized loan.

The sold and repurchased securities are often termed "pledged collateral." In these transactions, which are typically overnight, the lender will ordinarily lend cash equal to only a fraction of the security's value, with the remaining unfunded part called the "haircut." The size of the haircut is inversely related to the safety and liquidity of the security, with Treasury securities requiring the smallest haircuts. When the securities are repurchased the following day, the borrower will pay back the initial cash plus an additional amount known as the repo rate. The repo rate is essentially an overnight interest rate paid on a collateralized loan.
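The haircut and repo-rate mechanics above reduce to simple arithmetic. The sketch below works through an overnight repo with illustrative numbers (a 2 percent haircut and a 0.5 percent annualized repo rate on an actual/360 day count are assumptions, not figures from the post).

```python
def repo_cash_lent(collateral_value, haircut):
    """Cash advanced against pledged collateral, net of the haircut."""
    return collateral_value * (1.0 - haircut)

def repurchase_price(cash_lent, repo_rate_annual, days=1, day_count=360):
    """Repurchase price: the cash advanced plus interest accrued at the
    repo rate over the term of the transaction."""
    return cash_lent * (1.0 + repo_rate_annual * days / day_count)

# $10 million of Treasuries, 2 percent haircut, 0.5 percent repo rate:
cash = repo_cash_lent(10_000_000, 0.02)   # roughly $9.8 million advanced
payback = repurchase_price(cash, 0.005)   # next-day repurchase price
overnight_interest = payback - cash       # the lender's one-day return
```

The difference between the repurchase price and the cash advanced is the economic interest on what is, in substance, a collateralized overnight loan.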

Central bank purchases of Treasury securities may have a multiplicative effect on the potential efficiency of the repo market because these securities are often used in a chain of transactions before reaching a final holder for the evening. Here's a great diagram presented by Phil Prince of Pine River Capital Management illustrating the role that bonds and U.S. Treasuries play in facilitating a variety of transactions. In this example, the UST (U.S. Treasury) securities are first used as collateral in an exchange between the UST securities lender and the globally systemically important financial institution (GSIFI bank/broker dealer), then between the GSIFI bank and the cash provider, a money market mutual fund (MMMF), corporation, or sovereign wealth fund (SWF). The reuse of the UST collateral reduces the funding cost of the GSIFI bank and, hence, the cost to the levered investor/hedge fund who is trying to exploit discrepancies in the pricing of a corporate bond and stock.

Just how important or large is this pool of reusable collateral? Manmohan Singh of the International Monetary Fund presented the following charts, depicting the pledged collateral at major U.S. and European financial institutions that can be reused in other transactions.

So how do central bank purchases of high-quality, liquid assets affect the repo market—and why should macroeconomists care? In his presentation, Marvin Goodfriend of Carnegie Mellon University concluded that central bank asset purchases, which he terms "pure monetary policy," lower short-term interest rates (especially bank-to-bank lending) but increase the cost of funding illiquid assets through the repo market. And Singh noted that repo rates are an important part of the constellation of short-term interest rates and directly link overnight markets with the longer-term collateral being pledged. Thus, the interaction between a central bank's interest-rate policy and its balance sheet policy is an important aspect of the transmission of monetary policy to longer-term interest rates and real economic activity.

Ulrich Bindseil, director of general market operations at the ECB, discussed a variety of ways in which central bank actions may affect, or be affected by, bond market liquidity. One way that central banks may mitigate any adverse impact on market liquidity is through their securities lending programs, according to Bindseil. Central banks use such programs to lend particular bonds back out to the market to "provide a secondary and temporary source of securities to the financing market...to promote smooth clearing of Treasury and Agency securities."

On June 2, for example, the New York Fed lent $17.8 billion of UST securities from the Fed's portfolio. These operations are structured as collateral swaps—dealers pledge other U.S. Treasury bonds as collateral with the Fed. During the financial crisis, the Federal Reserve used an expanded version of its securities lending program called the Term Securities Lending Facility to allow firms to replace lower-quality collateral that was difficult to use in repo transactions with Treasury securities.

Finally, the Fed currently releases some bonds to the market each day in return for cash, through its overnight reverse repo operations, a supplementary facility used to support control of the federal funds rate as the Federal Open Market Committee proceeds with normalization. However, this release has an important limitation: these operations are conducted in the triparty repo market, and the bonds released through these operations can be reused only within that market. In contrast, if the Fed were to sell its U.S. Treasuries, the securities could not only be used in the triparty repo market but also as collateral in other transactions including ones in the bilateral repo market (you can read more on these markets here). As long as central bank portfolios remain large and continue to grow as in Europe and Japan, policymakers are integrally linked to the financial plumbing at its most basic level.

To see a video of the full discussion of these issues as well as other conference presentations on bond market liquidity, market infrastructure, and the management of liquidity within financial institutions, please visit Getting a Grip on Liquidity: Markets, Institutions, and Central Banks. My colleague Larry Wall's conference takeaways on the elusive definition of liquidity, along with the impact of innovation and regulation on liquidity, are here.

December 4, 2013

Is (Risk) Sharing Always a Virtue?

The financial system cannot be made completely safe because it exists to allocate funds to inherently risky projects in the real economy. Thus, an important question for policymakers is how best to structure the financial system to absorb these losses while minimizing the risk that financial sector failures will impair the real economy.

Standard theories would predict that one good way of reducing financial sector risk is diversification. For example, the financial system could be structured to facilitate the development of large banks, a point often made by advocates for big banks such as Steve Bartlett. Another, not mutually exclusive, way of enhancing diversification is to create a system that shares risks across banks. An example is the Dodd-Frank Act mandate requiring formerly over-the-counter derivatives transactions to be centrally cleared.
 
However, do these conclusions based on individual bank stability necessarily imply that risk sharing will make the financial system safer? Is it even relevant to the principal risks facing the financial system? Some of the papers presented at the recent Atlanta Fed conference, "Indices of Riskiness: Management and Regulatory Implications," broadly addressed these questions and others. Other papers discuss the impact of bank distress on local economies, methods of predicting bank failure, and various aspects of incentive compensation paid to bankers (which I discuss in a recent Notes from the Vault).

The stability implications of greater risk sharing across banks are explored in "Systemic Risk and Stability in Financial Networks" by Daron Acemoglu, Asuman Ozdaglar, and Alireza Tahbaz-Salehi. They develop a theoretical model of risk sharing in networks of banks. The most relevant comparison they draw is between what they call a "complete financial network" (maximum possible diversification) and a "weakly connected" network in which there is substantial risk sharing between pairs of banks but very little risk sharing outside the individual pairs. Consistent with the standard view of diversification, the complete networks experience few, if any, failures when individual banks are subject to small shocks, but some pairs of banks do fail in the weakly connected networks. However, at some point the losses become so large that the complete network undergoes a phase transition, spreading the losses in a way that causes the failure of more banks than would have occurred with less risk sharing.
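The contrast between the two network structures can be caricatured in a few lines of code. This toy is a stylized sketch inspired by the paper's comparison, not the authors' actual model: a loss at one bank is either spread evenly across all banks ("complete" network) or split within a single pair ("weakly connected"), and a bank fails when its share of the loss exceeds its capital.

```python
def failures(shock, n_banks, capital_per_bank, complete):
    """Count bank failures after one bank suffers `shock` in losses,
    under maximal risk sharing (complete=True) or pairwise sharing.
    All parameters are illustrative."""
    if complete:
        per_bank = shock / n_banks        # loss spread across all banks
        return n_banks if per_bank > capital_per_bank else 0
    per_bank = shock / 2                  # loss shared within one pair
    return 2 if per_bank > capital_per_bank else 0

N, CAP = 10, 5.0
# Small shock: the complete network absorbs it; the isolated pair fails.
small_complete = failures(12.0, N, CAP, complete=True)    # 0 failures
small_pair = failures(12.0, N, CAP, complete=False)       # 2 failures
# Large shock: the complete network fails everywhere at once.
large_complete = failures(60.0, N, CAP, complete=True)    # 10 failures
large_pair = failures(60.0, N, CAP, complete=False)       # 2 failures
```

The crossover between the two regimes is the phase transition the paper identifies: diversification dominates for small shocks but propagates large ones system-wide.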

Extrapolating from this paper, one could imagine that risk sharing could induce a false sense of security that would ultimately make a financial system substantially less stable. At first a more interconnected system shrugs off smaller shocks with seemingly no adverse impact. This leads bankers and policymakers to believe that the system can handle even more risk because it has become more stable. However, at some point the increased risk taking leads to losses sufficiently large to trigger a phase transition, and the system proves to be even less stable than it was with weaker interconnections.

While interconnections between financial firms are a theoretically important determinant of contagion, how important are these connections in practice? "Financial Firm Bankruptcy and Contagion," by Jean Helwege and Gaiyan Zhang, analyzes the spillovers from distressed and failing financial firms from 1980 to 2010. Looking at the financial firms that failed, they find that counterparty risk exposures (the interconnections) tend to be small, with no single exposure above $2 billion and the average a mere $53.4 million. They note that these small exposures are consistent with regulations that limit banks' exposure to any single counterparty. They then look at information contagion, in which the disclosure of distress at one financial firm may signal adverse information about the quality of a rival's assets. They find that the effect of these signals is comparable to that found for direct credit exposure.

Helwege and Zhang's results suggest that we should be at least as concerned about separate banks' exposure to an adverse shock that hits all of their assets as we should be about losses that are shared through bank networks. One possible common shock is the likely increase in the level and slope of the term structure as the Federal Reserve begins tapering its asset purchases and starts a process ultimately leading to the normalization of short-term interest rate setting. Although historical data cannot directly address banks' current exposure to such shocks, such data can provide evidence on banks' past exposure. William B. English, Skander J. Van den Heuvel, and Egon Zakrajšek presented evidence on this exposure in the paper "Interest Rate Risk and Bank Equity Valuations." They find a significant decrease in bank stock prices in response to an unexpected increase in the level or slope of the term structure. The response to slope increases (likely the primary effect of tapering) is somewhat attenuated at banks with large maturity gaps. One explanation for this finding is that these banks may partially recover their current losses with gains they will accrue when booking new assets (funded by shorter-term liabilities).

Overall, the papers presented in this part of the conference suggest that more risk sharing among financial institutions is not necessarily always better. Even though greater risk sharing may provide the appearance of increased stability in response to small shocks, it can leave the system less robust to larger shocks. The evidence also suggests that shared exposures to a common risk are likely to present at least as important a threat to financial stability as interconnections among financial firms, especially as the term structure and the overall economy respond to the eventual return to normal monetary policy. Along these lines, I recently offered some thoughts on how to reduce the risk of large widespread losses due to exposures to a common (credit) risk factor.

By Larry Wall, director of the Atlanta Fed's Center for Financial Innovation and Stability

 

Note: The conference "Indices of Riskiness: Management and Regulatory Implications" was organized by Glenn Harrison (Georgia State University's Center for the Economic Analysis of Risk), Jean-Charles Rochet (University of Zurich), Markus Sticker, Dirk Tasche (Bank of England, Prudential Regulation Authority), and Larry Wall (the Atlanta Fed's Center for Financial Innovation and Stability).