Real Estate Research provides analysis of topical research and current issues in the fields of housing and real estate economics. Authors for the blog include the Atlanta Fed's Kristopher Gerardi, Carl Hudson, and analysts, as well as the Boston Fed's Christopher Foote and Paul Willen.
January 14, 2015
The Effectiveness of Restrictions on Mortgage Equity Withdrawal in Curtailing Default: The Case of Texas
As an economist who has studied the causes of the recent mortgage default and foreclosure crisis, I am often asked how to design policies that will minimize the likelihood of another crisis. My typical response to such a question is that one of the most effective ways of lowering mortgage defaults would be to limit borrower leverage by either increasing down payment requirements at the time of purchase or limiting home equity withdrawal subsequent to purchase.
The reason behind my belief is twofold. First, economic theory tells us that being in a situation of negative equity (where the remaining balance of the mortgage is greater than the market value of the property) is a necessary condition for default and foreclosure. Homeowners with positive equity will almost always have a financial incentive to sell their homes instead of suffering through the foreclosure process, while borrowers who are “under water” have a difficult time refinancing or selling (since they would need to have enough cash at closing to cover the difference between the outstanding balance of the mortgage and the sale price/appraisal of the house) and have less of a financial incentive to continue paying the mortgage. Second, numerous empirical studies in the literature have confirmed the theory by documenting a strong positive correlation between the extent of negative equity and the propensity to default on one’s mortgage.
New evidence on preventing defaults
An important new paper by Anil Kumar, an economist at the Federal Reserve Bank of Dallas, provides new evidence that shows just how effective restricting leverage can be in preventing mortgage defaults. His paper confirms many of the findings in previous studies that have shown a positive relationship between negative equity and default. However, it goes a step further by using plausibly random variation in home equity positions created by a government policy that placed explicit restrictions on home equity withdrawal.
Kumar's paper is a significant contribution to the literature because it seems to overcome a serious identification issue that has plagued most empirical studies on the topic. The major challenge is that a homeowner can partially control his or her equity position through decisions about initial down payments on purchase mortgages and decisions about cash-out refinancing and home equity loans or lines of credit subsequent to purchase. As a result, it's unclear whether homeowners with more negative equity are more likely to default because of their worse equity positions or because of other reasons (unobserved by the researcher) that happen to be correlated with the decision to put less money down at purchase or to extract more equity over time.
Both theory and empirical evidence tell us that more impatient individuals tend to borrow more and are more likely to default on their debts. Thus, it might simply be the case that more impatient borrowers who are less likely to repay any type of debt choose to put less money down and extract more equity over time, creating the observed correlation between negative equity and the propensity to default. To put it in the language of econometrics, there are both selection and treatment effects that could be driving the correlation that we see in the data, and the policy implications of restricting borrower leverage are likely very different depending on which cause is more important.
Do home equity restrictions cause lower default rates?
The paper focuses on a policy enacted in the state of Texas that placed severe restrictions on the extent of home equity withdrawal. The Texas constitution, enacted in 1876, actually prohibited home equity withdrawal. The prohibition was eventually lifted in 1997 and the restrictions were further relaxed in 2003, but even in the post-2003 period, Texas law placed serious limits on equity withdrawal, which remain in effect today.1 Subsequent to purchase, a borrower cannot borrow more than 50 percent of the home's appraised value through a home equity line of credit, and total mortgage debt after any equity extraction cannot exceed 80 percent of the appraised value, an 80 percent combined loan-to-value (LTV) cap. For example, if a borrower owned a home worth $200,000 and had an outstanding mortgage balance of $140,000, the borrower would be allowed to take out only $20,000 in a cash-out refinance. It is important to note that this LTV restriction does not bind at the time of purchase, so a homebuyer in Texas could take out a zero-down-payment loan and thus begin the homeownership tenure with an LTV ratio of 100 percent (we will come back to this issue later).
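To make the mechanics of the cap concrete, the 80 percent LTV rule can be written as a one-line calculation. This is a sketch of the arithmetic in the example above; the function name and its default cap are our own shorthand, not language from the Texas statute.

```python
def max_cash_out(home_value, mortgage_balance, ltv_cap=0.80):
    """Equity a borrower may extract under a combined loan-to-value cap:
    total mortgage debt after the cash-out cannot exceed ltv_cap * value."""
    return max(0.0, ltv_cap * home_value - mortgage_balance)

# The example from the text: a $200,000 home with a $140,000 balance
print(max_cash_out(200_000, 140_000))  # 20000.0
```

A borrower already at or above the cap (say, a $90,000 balance on a $100,000 home) would be allowed no cash-out at all.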
Here's a nice quote from the April 4, 2010, issue of the Washington Post crediting the cash-out restriction for Texas weathering the foreclosure crisis better than many areas of the country:
But there is a broader secret to Texas's success, and Washington reformers ought to be paying very close attention. If there's one thing that Congress can do to help protect borrowers from the worst lending excesses that fueled the mortgage and financial crises, it's to follow the Lone Star State's lead and put the brakes on "cash-out" refinancing and home-equity lending.
At first glance, the data suggest that such a sentiment may be correct. In the figure below, we display subprime mortgage serious delinquency rates (defined as loans that are at least 90 days delinquent) for Texas and its neighbors (Arkansas, Louisiana, New Mexico, and Oklahoma). We focus on the subprime segment of the market because these are the borrowers who are more likely to be credit-constrained and thus more likely to extract home equity at any given time. It is apparent from the figure that Texas had the lowest subprime mortgage delinquency rates over most of the sample period. While the paper uses a slightly different data set, a similar pattern holds (see Figure 1 in the paper). The figure is certainly compelling and suggests that the home equity withdrawal restrictions in Texas had an important effect on default behavior, but a simple comparison of aggregate default rates across states really doesn’t tell us whether the policy had a causal impact on behavior. There could be other differences between Texas and its neighboring states that are driving the differences in default rates. For example, house price volatility over the course of the boom and bust was significantly lower in Texas compared to the rest of the country, which could also explain the differences in default rates that we see in the figure.
The paper uses a relatively sophisticated econometric technique called "regression discontinuity" to try to isolate the causal impact of the Texas policy on mortgage default rates. We won't get into the gory details of the methodology in this post, so for anyone who wants more details, this paper provides a nice general overview of the technique. Essentially, the regression discontinuity approach implemented in the paper compares default rates over the 1999–2011 period in Texas counties and non-Texas counties close to the Texas borders with Louisiana, New Mexico, Arkansas, and Oklahoma while controlling for potential (nonlinear) trends in default rates that occur as a function of distance on each side of the Texas border. The paper also controls for other differences across counties that are likely correlated with mortgage default rates (such as average house price appreciation, average credit score, and more). The idea is to precisely identify a discontinuity in default rates at the Texas border caused by the restrictions on home equity withdrawal in Texas. This strikes us as a pretty convincing identification strategy, especially in light of the fact that information on actual home equity withdrawal is not available in the data set used in the paper.
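For readers who want intuition for the design, here is a minimal simulated version of a border regression discontinuity. Every number below is invented for illustration; the paper's actual specification uses real county-level data, additional covariates, and several bandwidths.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
# Signed distance to the Texas border in miles (negative = Texas side)
dist = rng.uniform(-100, 100, n)
texas = (dist < 0).astype(float)
# Simulated default rates: a smooth trend in distance plus a -1.5
# percentage point jump at the border (the "treatment effect")
default = 4.0 + 0.01 * dist - 1.5 * texas + rng.normal(0, 0.5, n)

# RD regression: treatment dummy plus a quadratic control function in
# distance, allowed to differ on each side of the border
X = np.column_stack([
    np.ones(n), texas,
    dist, dist**2,
    texas * dist, texas * dist**2,
])
beta, *_ = np.linalg.lstsq(X, default, rcond=None)
print(round(beta[1], 2))  # recovers a discontinuity near -1.5
```

The key point is that the smooth polynomial in distance absorbs gradual cross-border differences, so the coefficient on the Texas dummy isolates only the jump at the border itself.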
The estimation results of the regression discontinuity specification show that the equity restriction policy in Texas lowered overall mortgage default rates over the 13-year period by 0.4 to 1.8 percentage points depending on assumptions about sample restrictions (including counties within 25, 50, 75, or 100 miles of the border) and functional form assumptions for the “control function” (that is, whether distance to the border is assumed to be a linear, quadratic, or cubic polynomial). At first glance, this may not seem like a large effect, but keep in mind that the average mortgage default rate over the entire sample period was only slightly above 3 percentage points in Texas and 4 percentage points in the neighboring states. The paper also restricts the sample to subprime mortgages only and finds significantly larger effects (2 to 4 percentage points), which makes sense. We expect subprime mortgage borrowers to be affected more by the equity restriction since they are more likely to withdraw home equity.2 The paper implements a battery of robustness checks to make sure that the results aren’t overly sensitive to functional form assumptions and adds controls for other types of state-level policy differences. Based on the results of those tests, the findings appear to be quite stable.
But is it a good policy?
So the paper confirms what previous research on the relationship between equity and mortgage default has found, although that earlier work used methods that aren’t as clean as the regression discontinuity approach employed in this analysis. This doesn’t mean, however, that such a law change is necessarily good policy. While it seems to be effective in reducing defaults, it may also have some real costs. The most obvious one is the decrease in the volume of low-cost secured credit that many borrowers used to improve their circumstances during the housing boom. An unintended consequence of the policy might have been to push financially distressed households into higher-cost credit markets like credit cards or payday loans. A second drawback of the policy may have been that it increased homeowner leverage at the time of purchase. Because there were no restrictions on LTV ratios at the time of purchase, many homebuyers may have decided to make lower down payments, knowing that their access to equity would be restricted in the future. It’s also possible that this resulted in a larger volume of subprime mortgage lending in Texas. Households with relatively high credit scores who could have obtained a prime mortgage with a significant down payment (say, 20 percent) may have turned to the subprime segment of the market, where they could obtain loans with low down payments but with much more onerous contract terms.
While it’s not clear whether the actual Texas policy of restricting home equity extraction is welfare-improving, the research suggests that restricting borrower leverage is an effective way to reduce mortgage default rates. But limiting borrower leverage is very unpopular. In fact, it probably isn’t too much of an exaggeration to say that the vast majority of market participants are adamantly opposed to such policies. After all, it is perhaps the only policy on which both the Center for Responsible Lending (CRL) and the Mortgage Bankers Association (MBA) share the same negative view.3 Thus, while such policies have been adopted in other countries, don’t expect to see them adopted in the United States any time soon.4 On the contrary, policy is more likely to move in the opposite direction, as evidenced by the Federal Housing Finance Agency’s announcement that it would relax down payment requirements for Fannie Mae and Freddie Mac.
By Kris Gerardi, financial economist and associate policy adviser at the Federal Reserve Bank of Atlanta
1 Before 1998, both home equity lending (loans and lines of credit) and cash-out refinancing were explicitly prohibited in Texas. A 1997 constitutional amendment relaxed this ban by allowing for closed-end home equity loans and cash-out refinancing as long as the combined LTV ratio did not exceed 80 percent of the appraised value (among a few other limitations that are discussed in the paper). In 2003, another constitutional amendment passed that further allowed home equity lines of credit for up to 50 percent of the property’s appraised value, although still subject to a cap on the combined LTV ratio of 80 percent.
2 Relative to average default rates, the effects are actually smaller for the subprime sample, since the average rate is significantly higher in the subprime segment of the market (a 10 percent subprime default rate compared to the 3 percent overall default rate in Texas).
July 01, 2013
Misrepresentation, or a Failure in Due Diligence? Another Argument
In the last post we wrote together, we discussed a paper on the role of misrepresentation in mortgage securitization by Tomasz Piskorski, Amit Seru, and James Witkin (2013, henceforth PSW).1 That paper argues that the people who created mortgage-backed securities (MBS) during the housing boom did not always tell the truth about the mortgages backing these bonds. Today, we discuss a second paper on misrepresentation, this one by John M. Griffin and Gonzalo Maturana (2013, henceforth GM).2 The two papers have a similar research approach, and the two sets of authors interpret their results in the same way—namely, in support of the hypothesis that misrepresentation was an important cause of the mortgage crisis. We offer an alternative interpretation.
We believe that the evidence shows that investors were not fooled and that deception had little or no effect on investor forecasts of defaults. Consequently, deception played little or no role in causing the crisis (see the post on PSW for details). We do think, however, that some results in the GM paper have significant implications for our understanding of the crisis, although GM does not focus on these particular results.
We argue that one can interpret their evidence on misreporting as a measure of due diligence on the part of lenders. Many—including most notably the New York Attorney General's office in a lawsuit against JP Morgan—allege that the dismal performance of securitized mortgages made after 2005 relative to those made before 2005 reflects a precipitous drop in due diligence among lenders starting in that year. But GM's paper implies that there was no such decline. In fact, for most measures of due diligence, there is almost no time series variation over the housing cycle at all.
Before we discuss the paper's implications for underwriting standards, it is important to outline GM's basic research approach with regard to misrepresentation. As with PSW, GM's fundamental idea is to compare two sets of loan-level mortgage records to see if the people marketing MBS misrepresented what they were selling. Specifically, GM compare information about mortgages supplied by MBS trustees with public records data from deed registries, as well as data on estimated house prices from an automated valuation model (AVM). PSW, by contrast, compare MBS trustees' data with information from a credit bureau. In general, GM's choice of public records as the comparison data set is probably the more reliable one.
While PSW refer to their credit bureau data as "actual" data, it is well known that credit bureau data also contain errors, a fact that complicates any study of misrepresentation. For example, PSW often find that the credit bureau reports a second lien for a particular mortgage borrower while the MBS trustees report no such lien. The implication in such instances is that the MBS trustees misrepresented the loan. But PSW must also acknowledge that the reverse discrepancy turns out to be equally likely. Just as often, second liens appear in MBS data and not in the supposedly pristine data from the credit bureau. No data set is perfect, but GM's public records data is no doubt much cleaner than the credit bureau data. For a purchase mortgage, the records filed at a deed registry are not only important legal documents, they are also recorded on or very close to the day that the mortgage is originated. As a result, the public records data come closer to being "actual" data than data from a credit bureau.
GM measure four types of "misreporting" with their data: 1) unreported second liens; 2) investors incorrectly reported as owner-occupants; 3) unreported "flipping," in which the collateral had been sold previously; and 4) overvaluation of the property, which is defined to occur when the AVM reports a valuation that is more than 10 percent below the appraised house value appearing on the loan application. To us, neither 3 nor 4 seems like a reasonable definition of misreporting. For point 3, issuers never reported anything about whether the house was flipped. This issue turns out to be a moot point, however, as Figure 1 from GM (reproduced below) shows that flipping almost never occurred. Regarding point 4, it's not surprising that AVMs often report substantially different numbers than flesh-and-blood appraisers do, for the same reason that two people guessing the number of jelly beans in a jar are likely to disagree. Estimating the right value exactly is not easy, even for people (and automated computer models) with the best of intentions.
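For concreteness, GM's fourth flag amounts to a simple threshold test. The predicate below is our own rendering of that definition; the 10 percent cutoff is the one GM use, while the function name and example figures are illustrative.

```python
def flagged_overvalued(appraised_value, avm_value, threshold=0.10):
    """GM-style overvaluation flag: the automated valuation model's
    estimate is more than `threshold` below the loan-file appraisal."""
    return avm_value < (1 - threshold) * appraised_value

# Hypothetical $200,000 appraisal against two AVM estimates
print(flagged_overvalued(200_000, 175_000))  # True: AVM is 12.5% below
print(flagged_overvalued(200_000, 185_000))  # False: only 7.5% below
```

Note how mechanical the flag is: any honest disagreement between the appraiser and the model larger than 10 percent is counted as "misreporting," which is exactly the jelly-bean objection above.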
More consequential are GM's findings relating to misrepresentations of the types identified in points 1 and 2. Here, GM's findings are essentially the same as PSW's, though GM report much higher rates of misrepresentation than do PSW. However, GM acknowledge that the difference stems almost entirely from their decision to ignore refinance loans: according to Table IA.VIII in GM's appendix, refinances have dramatically lower misrepresentation rates. But just as the central findings of GM are similar to those in PSW, so is our critique. The historical evidence indicates that investors were properly skeptical of the data provided by MBS issuers. Moreover, deception did not prevent investors from making accurate forecasts about default rates among securitized loans. We direct the reader to our post on PSW for more details.
Though we do not believe that GM can persuasively link misrepresentation of MBS data to massive investor losses, an alternative interpretation of their data has the potential to shed light on the mortgage crisis. One way to interpret the level of misreporting—in particular, for occupancy—is as a measure of due diligence on the part of lenders. Neither PSW nor GM suggest that for any particular loan, the MBS issuer knew that the borrower was an investor and did not plan to occupy the property. Instead, these authors claim that someone along the securitization chain failed to do the necessary due diligence to determine if the borrowers who claimed to be owner-occupiers were in fact investors. This due diligence was certainly possible. A sufficiently motivated loan officer could have done exactly what GM did: match loan files with public records to figure out that a potential borrower did not intend to live in the house he was buying.3 As a result, we would expect that when due diligence goes down, occupancy misreporting would go up.
Obtaining a proxy measure of due diligence is useful, because many commentators have argued that the poor performance of subprime loans made after 2005 as compared to loans made before 2005 (see Figure 3 from Foote, Gerardi, and Willen, 2012) resulted from a precipitous drop in due diligence. For example, in the recent complaint against JP Morgan, the New York Attorney General's office writes that:
[Subprime lenders], as early as February 2005, began to reduce the amount of due diligence conducted "in order to make us more competitive on bids with larger sub-prime sellers."
So what does GM's proxy measure of due diligence show? With respect to occupancy, there is little or no change in the incidence of occupancy misreporting in 2005. Indeed, looking across the entire sample, we see that occupancy misreporting rose smoothly from about 11 percent in 2002 to a peak of about 13 percent in 2006. In other words, at the peak of the boom, the incidence of sloppy underwriting was almost the same as it was four years earlier. In fact, all four series reported by GM show the same pattern, or rather the same lack of one. With the exception of the first quarter of 2006, second-lien misreporting was uniformly lower during what commentator Yves Smith refers to as the "toxic phase of subprime" lending than it was in 2003 and 2004, when loans performed dramatically better.
By Paul Willen, senior economist and policy adviser at the Federal Reserve Bank of Boston, with help from
Chris Foote, senior economist and policy adviser at the Federal Reserve Bank of Boston, and
Kris Gerardi, financial economist and associate policy adviser at the Federal Reserve Bank of Atlanta
1 Piskorski, Tomasz, Amit Seru, and James Witkin. "Asset Quality Misrepresentation by Financial Intermediaries: Evidence from the RMBS Market" (February 12, 2013). Columbia Business School Research Paper No. 13-7. Available at SSRN: ssrn.com/abstract=2215422 or http://dx.doi.org/10.2139/ssrn.2215422.
3 For example, the loan officer could use the public records to determine if a potential buyer owned multiple properties, or if the buyer recently put another property in a spouse's name.
April 26, 2012
Can home loan modification through the 60/40 Plan really save the housing sector?
In a recent article in the Federal Reserve Bank of St. Louis Review, Manuel Santos, a professor at the University of Miami, claims to offer a simple solution to "save the housing sector." Called the "60/40 Plan," his proposal is the centerpiece of a business called 60/40 The Plan Inc. Santos’s article is, in our opinion, written less like an academic article and more like promotional material.
The developer of the 60/40 Plan, Gustavo Diaz, is seeking a patent for the proposal. Unfortunately for the stressed mortgage market, his idea is simply a specific variant of a long-standing mortgage-servicing practice known as "principal forbearance." In general, principal forbearance occurs when the mortgage lender grants a temporary reduction of a borrower’s monthly mortgage payment, often reducing the payment by a significant fraction, with the stipulation that the borrower repay this benefit, with interest, at a later date.
Principal forbearance is a loss-mitigation tool that mortgage lenders and servicers have been using for decades. In fact, Fannie Mae and Freddie Mac are currently using this technique as a loss mitigation tool and alternative to principal forgiveness (which Federal Housing Finance Agency Acting Director Edward DeMarco discussed here). Private mortgage lenders have also widely used principal forbearance, especially in the first few years of the recent foreclosure crisis.
As articulated in Diaz’s 60/40 Plan, principal forbearance simply splits a distressed borrower’s current principal balance into two parts: a 60 percent share that will fully amortize over 30 years and be subject to interest payments at market rates, and a 40 percent share that is treated as a zero-interest balloon loan due at the time of sale.
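Because the fully amortizing payment is linear in the principal, splitting off a zero-interest balloon share cuts the monthly payment in exact proportion. The sketch below illustrates this; the 5 percent rate and $200,000 balance are our own illustrative assumptions, not figures from the article.

```python
def monthly_payment(principal, annual_rate, years):
    """Level payment on a fully amortizing fixed-rate loan."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

balance = 200_000
full_payment = monthly_payment(balance, 0.05, 30)
plan_payment = monthly_payment(0.60 * balance, 0.05, 30)  # 40% deferred at 0%

# Under the plan the payment falls by exactly 40 percent; the deferred
# $80,000 comes due as a balloon when the house is sold.
print(round(full_payment, 2), round(plan_payment, 2))
```

This proportional relief is the entire payment-reduction content of the plan, which is why the choice of the 60/40 split, rather than any novel contract feature, does all the work.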
Of course, the optimal shares and other terms of a principal forbearance program should be, and often are in practice, based on a given household's financial situation. One size does not fit all. Professor Santos advocates the 60/40 Plan in large part because it is, in the language of economists, "incentive compatible": borrowers who need assistance with their mortgage payments will find the program helpful, while borrowers who do not need assistance will find it unappealing and thus will have little incentive to pretend to need help in order to qualify.
He writes: "It is important to understand that the 60/40 Plan builds on financial postulates and incentive compatible mechanisms that can be firmly implemented. It is designed as a first-best contract between the homeowner and the lender by holding onto some basic principles of incentive theory."
We agree completely with this sentiment. In fact, one of us wrote an article almost five years ago that advocated a policy of principal forbearance over principal forgiveness for exactly these reasons. Thus, the 60/40 Plan is not a novel concept, contrary to what Professor Santos seems to believe. But even more problematic, principal forbearance, as we have come to realize over the past few years, is not a panacea for the housing market, for several reasons. First, it is really only helpful and appealing to borrowers who have temporary cash-flow problems and do not wish to move. This is because under the 60/40 Plan, and principal forbearance in general, a borrower remains in a position of negative equity, which makes it virtually impossible to sell: the borrower would need to come up with the amount of negative equity in cash to repay the entire principal balance of the mortgage at closing. For example, in the numerical example that Professor Santos works through to illustrate how the 60/40 Plan would work in practice, the borrower remains in a position of negative equity for 15 years. Thus, if a cash-strapped borrower needs to move immediately, or even a few years down the road, default (or re-default) is very likely.
Second, carrying 40 percent of the mortgage at a zero (or below market) interest rate imposes significant costs on the lender or investor. (These costs are viewed as being offset by savings from avoiding foreclosure.) Nevertheless, principal forbearance is not always going to be a positive net-present-value proposition; this depends on the share being protected (40 percent is quite high), the amortization schedule (30 years is very long), the discount rate, and the re-default rate. Indeed, Professor Santos seemingly assumes no re-default despite the fact that under the plan a borrower would remain in negative equity for a very long time, as we discussed above.
Third, most distressed mortgages are not held by depository institutions as whole loans. Fannie Mae and Freddie Mac have been able to selectively employ principal forbearance because they make investors whole in terms of the original promised principal and interest payments. This is not true for private-label securitizations, and there have been ongoing disagreements between investors and servicers as to optimal loss-mitigation strategies. (And there is no reason to think this proposal would not be similarly controversial.) The 60/40 Plan also seemingly ignores the significant complications posed by existing second liens and mortgage insurance policies.
Finally, Professor Santos claims that the 40 percent zero-coupon balloon shares, which would typically be nonrecourse loans to severely distressed homeowners, will have a deep secondary market that pulls liquidity back into the housing market. This seems far-fetched given that these assets have little or no yield and will have high default rates with no recourse. Reading further, it appears that the proposal assumes a Federal Deposit Insurance Corporation (FDIC) insurance wrap for these assets to facilitate their sale. This insurance would likely be expensive and would require a controversial new program, funded either by premiums high enough to cover expected losses or by a congressional appropriation. The proposal also ignores the fact that FDIC-insured depository institutions hold only about 25 percent of all mortgages.
Principal forbearance can be a useful loss-mitigation tool, although its value depends on economic circumstances. The 60/40 Plan that Professor Santos advocates is an example of principal forbearance and not a novel concept. Moreover, the 60/40 Plan does not consider a number of important institutional factors that have hampered loss-mitigation activities since the onset of the mortgage foreclosure crisis. Simply put, the 60/40 Plan will not save the housing market.
By Scott Frame, financial economist and senior policy adviser, and
Kris Gerardi, financial economist and associate policy adviser at the Federal Reserve Bank of Atlanta
November 17, 2011
Taking on the conventional wisdom about fixed rate mortgages
The long-term fixed rate mortgage (FRM) is a central part of the mortgage landscape in America. According to recent data, the FRM accounts for 81 percent of all outstanding mortgages and 85 percent of new originations.1 Why is it so common? The conventional wisdom is that the FRM is a great product created during the Great Depression to bring some stability to the housing market. Homeowners were defaulting in record numbers, the story goes, because their adjustable rate mortgages (ARMs) adjusted upward and caused payment shocks they could not absorb.
In a Senate Committee on Banking, Housing, and Urban Affairs hearing on October 20, some experts presented testimony that followed this conventional wisdom. As John Fenton, president and CEO, Affinity Federal Credit Union, who testified on behalf of the National Association of Federal Credit Unions, laid out in his written testimony:
Prior to the introduction of the 30-year FRM, U.S. homeowners were at the mercy of adjustable interest rates. After making payments on a loan at a fluctuating rate for a certain period, the borrower would be liable for the repayment of the remainder of the loan (balloon payment). Before the innovation of the 30-year FRM, borrowers could also be subject to the "call in" of the loan, meaning the lender could demand an immediate payment of the full remainder. The 30-year FRM was an innovative measure for the banking industry, with lasting significance that enabled mass home ownership through its predictability.
Of course, this picture of the 30-year FRM as bringing stability to the housing market has profound implications for recent history. Many critics attribute the problems in the mortgage market that started in 2007 to the proliferation of ARMs. According to the narrative, lenders, after 70 years of stability and success with FRMs, started experimenting with ARMs again in the 2000s, exposing borrowers to payment shocks that inevitably led to defaults and the housing crisis. Indeed, one of the other panelists at the hearing, Janis Bowdler, senior policy analyst for the National Council of La Raza, argued in her written testimony that "when the toxic mortgages began to reset and brokers and lenders could no longer maintain their refinance schemes, a recession ushered in record-high foreclosure rates."
I argue, on the other hand, both in my testimony at the hearing and in this post, that this narrative of the fixed rate mortgage, an inherently safe product invented during the Depression that would have mitigated the subprime crisis because it eliminated payment shocks, does not fit the facts.
Parsing the myths around the fixed rate mortgage
First, the FRM has been around far longer than most people realize. Most people attribute the FRM's introduction to the Federal Housing Administration (FHA) in the 1930s.2 But it was the building and loan societies (B&Ls), later known as savings and loans, that created them, and they created them a full hundred years earlier. Starting with the very first B&L, the Oxford Provident Building Society in Frankford, Pennsylvania, in 1831, the FRM accounted for almost every mortgage B&Ls originated. By the time of the Depression, B&Ls were not a niche player in the U.S. housing market. They were, rather, the largest single source of funding for residential mortgages, and the FRM was central to their business model.
As Table 2 of my testimony shows, B&Ls made about 40 percent of new residential mortgage originations in 1929 and 95 percent of those loans were long-term, fixed-rate, fully amortized mortgages. Importantly, B&Ls suffered mightily during the Depression, so the facts simply do not support the idea that the widespread use of FRMs would have prevented the housing crisis of the 1930s.
Source: Grebler, Blank and Winnick (1956)
Note: Market percentage is dollar-weighted. Building and loan societies were the main source of funds for residential mortgages and almost exclusively used long-term, fixed-rate, fully amortizing instruments.
To be sure, at 15–20 years, the terms on the FRMs the FHA insured were somewhat longer than those of pre-Depression FRMs, which typically had 10–15 year maturities.3 The 30-year FRM did not emerge into widespread use until later. It must be stressed that none of the arguments that Fenton made hinge on the length of the contract. Furthermore, the argument that Bowdler made in her testimony—that by delaying amortization, a 30-year maturity lowers the monthly payment as compared to a loan with shorter maturity—applies as much to ARMs as it does to FRMs.
But even if ARMs did not cause the Depression-era crisis, FRM supporters might ask, didn't the payment shocks from the exotic ARMs cause the most recent one? Again, the data say no. Table 1 of my Senate testimony shows that payment shocks actually played little role in the crisis.
Of the large sample of borrowers who lost their homes, only 12 percent had a payment amount at the time they defaulted that exceeded the amount of the first scheduled monthly payment on the loan. The reason there were so few is that almost 60 percent of the borrowers who lost their homes had, in fact, FRMs. But even the defaulters who did have ARMs typically had either the same or a lower payment amount due to policy-related cuts in short-term interest rates.
To be absolutely clear here, my discussion so far focuses entirely on the question of whether the design of the FRM is inherently safe and eliminates a major cause of foreclosures. The data say it does not, but that does not necessarily mean that the FRM does not have benefits. As I discussed in my testimony, all else being equal, ARMs do default more than FRMs, but since defaults occur even when the payments stay the same or fall, the higher rate is most likely connected to the type of borrower who chooses an ARM, not to the design of the mortgage itself.
The difficulty of measuring the systemic value of fixed rate mortgages
One common response to my claim that the payment shocks from ARMs did not cause the crisis is that ARMs caused the bubble and thus indirectly caused the foreclosure crisis. However, it is important to understand that this argument, which suggests that the FRM has some systemic benefit, is fundamentally different from the argument that the FRM is inherently safe. This difference is as significant as that between arguing that airbags reduce fatalities by preventing traumatic injuries and arguing that they somehow prevent car accidents.
Measuring the systemic contribution of the FRM is exceedingly difficult because the use of different mortgage products is endogenous. Theory predicts that home buyers in places where house price appreciation is high would try to get the biggest mortgage possible, conditional on their income, something that an ARM typically facilitates. When the yield curve slopes upward, as it usually does, short-term interest rates are below long-term rates, so ARMs offer lower initial payments than FRMs. Thus, it is very difficult to disentangle the causal effect of the housing boom on mortgage choice from the effect of mortgage choice on the housing boom.
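To see why an ARM "facilitates the biggest mortgage possible," it helps to compare initial monthly payments. The sketch below uses the standard fully amortizing payment formula; the loan amount and the two rates are our own illustrative assumptions, not figures from the post.

```python
# Illustrative only: compare the initial monthly payment on an FRM priced off
# long-term rates with an ARM whose initial rate reflects lower short-term
# rates. Loan size and rates are assumptions for the sake of the example.

def monthly_payment(principal, annual_rate, years):
    """Fully amortizing payment: P * r / (1 - (1 + r)^-n)."""
    r = annual_rate / 12          # monthly interest rate
    n = years * 12                # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

loan = 300_000
frm = monthly_payment(loan, 0.06, 30)   # priced off the long end of the curve
arm = monthly_payment(loan, 0.04, 30)   # ARM's initial rate, off the short end

print(f"FRM: ${frm:,.0f}/mo   ARM (initial): ${arm:,.0f}/mo")
```

With these assumed rates the ARM's initial payment is roughly 20 percent lower, which is exactly why a borrower stretching for the largest loan an underwriter will allow tends to choose it.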
In addition, there is evidence from overseas that suggests that the FRM is not essential for price stability. As Anthony B. Sanders, professor of finance at the George Mason School of Management, points out in his written testimony, FRMs are rare outside the United States. A theory of the stabilizing properties of FRMs would have to explain why Canadian borrowers emerged more or less unscathed from the global property bubble of the 2000s, despite almost exclusively using ARMs.
By Paul Willen, senior economist and policy adviser at the Boston Fed (with Boston Fed economist Christopher Foote and Atlanta Fed economist Kristopher Gerardi)
1 First liens in LPS data for May 2011.
October 04, 2011
The uncertain case against mortgage securitization
The opinions, analysis, and conclusions set forth are those of the authors and do not indicate concurrence by members of the Board of Governors of the Federal Reserve System or by other members of the staff.
Did mortgage securitization cause the mortgage crisis? One popular story goes like this: banks that originated mortgage loans and then sold them to securitizers didn't care whether the loans would be repaid. After all, since they sold the loans, they weren't on the hook for the defaults. Without any "skin in the game," those banks felt free to make worse and worse loans until...kaboom! The story is an appealing one and, since the beginning of the crisis, it has gained popularity among academics, journalists, and policymakers. It has even influenced financial reform. The only problem? The story might be wrong.
In this post we report on the latest round in an ongoing academic debate over this issue. We recently released two papers, available here and here, in which we argue that the evidence against securitization that many have found most damning has in fact been misinterpreted. Rather than being a settled issue, we believe securitization's role in the crisis remains an open and pressing question.
The question is an empirical one
Before we dive into the weeds, let us point out why the logic of the above story need not hold. The problem posed by securitization—that selling risk leads to excessive risk-taking—is not new. It is an example of the age-old incentive problem of moral hazard. Economists usually believe that moral hazard causes otherwise-profitable trade to not occur, or that it leads to the development of monitoring and incentive mechanisms to overcome the problem.
In the case of mortgage securitization, such mechanisms had been in place, and a high level of trade had been achieved, for a long time. Mortgage securitization was not invented in 2004. To the contrary, it has been a feature of the housing finance landscape for decades, without apparent incident. As far back as 1993, nearly two-thirds (65.3 percent) of mortgage volume was securitized, about the same fraction as was securitized in 2006 (67.6 percent) on the eve of the crisis. In order to address potential moral hazard, securitizers such as Fannie Mae and Freddie Mac (the government sponsored enterprises, or GSEs) long ago instituted regular audits, "putback" clauses forcing lenders to repurchase nonperforming or improperly originated loans, and other procedures designed to force banks to lend responsibly. Were such mechanisms successful? Perhaps, perhaps not. It is an empirical question, and so our understanding will rest heavily on the evidence.
The case against securitization
Benjamin Keys, Tanmoy Mukherjee, Amit Seru, and Vikrant Vig released an empirical paper in 2008 (revised in 2010) titled "Did Securitization Lead to Lax Screening? Evidence from Subprime Loans" (henceforth, KMSV) that pointed the finger squarely at securitization. The paper won several awards and, when it was published in the Quarterly Journal of Economics in 2010, it became that journal's most-cited paper of the year, with more than twice the citations of any other paper. In other words, it was a very well-received and influential paper.
And for good reason. KMSV employs a clever method to try to answer the question of securitization's role in the crisis. Banks often rely on borrowers' credit (FICO) scores to make lending decisions, using particular score thresholds to make determinations. Below 620, for example, it is hard to get a loan. KMSV argues that securitizers also use FICO score thresholds when deciding which loans to buy from banks. Loan applicants just to the left of the threshold (FICO of 619) are very similar to those just to the right (FICO of 621), but they differ in the chance that their bank will be able to sell their loan to securitizers. Will the bank treat them differently as a result? This seems to have the makings of an excellent natural experiment.
Figures 1 and 2, taken from KMSV, illustrate the heart of their findings. Using a data set of only private-label securitized loans, the top panel plots the number of loans at each FICO score. There is a large jump at 620, which, KMSV argues, is evidence that it was easier to securitize loans above 620. The bottom panel shows default rates for each FICO score. Though we would expect default to smoothly decrease as FICO increases, there is a significant jump up in default at exactly the same 620 threshold. It appears that because securitization is easier to the right of the 620 cutoff, banks made worse loans. This seems prima facie evidence in favor of the theory that mortgage securitization led to moral hazard and bad loans.
Reexamining the evidence
But what is really going on here? In September 2009, the Boston Fed published a paper we wrote (original version here, updated version here) arguing for a very different interpretation of this evidence. In fact, we argue that this evidence actually supports the opposing hypothesis that securitizers were to some extent able to regulate originators' lending practices.
The data set used in KMSV only tells part of the story because it contains only privately securitized loans. We see a jump in the number of these loans at 620, but we know nothing about what is happening to the number of nonsecuritized loans at this cutoff. The relevant measure of ease of securitization is not the number of securitized loans, but the chance that a given loan is securitized—in other words, the securitization rate.
We used a different data set that includes both securitized and nonsecuritized loans, allowing us to calculate the securitization rate. Figures 3 and 4 come from the latest version of our paper.
Like KMSV, we find a clear jump up in the default rate at 620, as shown in the bottom panel. However, the chance a loan is securitized actually goes down slightly at 620, as shown in the top panel. How can this be? It turns out that above the 620 cutoff banks make more of all loans, securitized and nonsecuritized alike. This general increase in the lending rate drives the increase in the number of securitized loans that was found in KMSV, even though the securitization rate itself does not increase. With no increase in the probability of securitization, it is hard to argue that the jump in defaults at 620 is occurring because easier securitization motivates banks to lend more freely.
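The distinction between a jump in the number of securitized loans and a jump in the securitization rate is easy to illustrate with toy numbers (ours, not the paper's data):

```python
# Toy illustration: if lenders simply approve more applicants above FICO 620
# while the securitization rate stays flat, a data set containing only
# securitized loans will still show a jump in loan counts at 620.

below, above = {"loans": 1000}, {"loans": 1500}  # more loans made above 620
sec_rate = 0.65                                   # same rate on both sides

for side, d in [("below 620", below), ("above 620", above)]:
    d["securitized"] = d["loans"] * sec_rate
    print(f"{side}: {d['loans']} loans, {d['securitized']:.0f} securitized "
          f"(rate {sec_rate:.0%})")
```

The count of securitized loans jumps from 650 to 975 even though the securitization rate is identical on both sides of the cutoff, which is why counts alone cannot establish that securitization got easier at 620.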
The real story behind the jumps in default
So why are banks changing their behavior around certain FICO cutoffs? To answer this question, we must go back to the mid-1990s and the introduction of FICO into mortgage underwriting. In 1995, Freddie Mac began to require mortgage lenders to use FICO scores and, in doing so, established a set of FICO tiers that persists to this day. Freddie directed lenders to give greater scrutiny to loan applicants with scores in the lower tiers. The threshold separating worse-quality applicants from better applicants was 620. The next threshold was 660. Fannie Mae followed suit with similar directives, and these rules of thumb quickly spread throughout the mortgage market, in part aided by their inclusion in automated underwriting software.
Importantly, the GSEs did not establish these FICO cutoffs as rules about what loans they would or would not securitize—they continued to securitize loans on either side of the thresholds, as before. These cutoffs were recommendations to lenders about how to improve underwriting quality by focusing their energy on vetting the most risky applicants, and they became de facto industry standards for underwriting all loans. Far from being evidence that securitization led to bad loans, the cutoffs are evidence of the success securitizers like Fannie and Freddie have had in directing lenders how to lend.
With this in mind, the data begin to make sense. Lenders, following the industry standard originally promulgated by the GSEs, take greater care extending credit to borrowers below 620 (and 660). They scrutinize applicants with scores below 620 more carefully and are less likely to approve them than applicants above 620, resulting in a jump-up in the total number of loans at the threshold. However, because of the greater scrutiny, the loans that are made below 620 are of higher average quality than the loans that are made above 620. This causes the jump-up in the default rate at the threshold.
Figures 5 and 6 show that this pattern also exists among loans that are kept in portfolio and never securitized. The change in lending standards causes these loans, as well as securitized loans, to jump in number and drop in quality at 620 (and 660). However, as figure 3 shows, the securitization rate doesn't change because securitized and nonsecuritized loans increase proportionately. The FICO cutoffs are used by lenders because they are general industry standards, not because the securitization rate changes. This means the cutoffs cannot provide evidence that securitization led to loose lending.
But the debate does not end there. In April 2010, Keys, Mukherjee, Seru, and Vig released a working paper (KMSV2), currently forthcoming in the Review of Financial Studies, that responded to the issues we raised. According to the paper, the mortgage market is segmented into two completely separate markets: 1) a "prime" market, in which only the GSEs buy loans, and 2) a "subprime" market, in which only private-label securitizers buy loans. KMSV2 argues that only private-label securitizers follow the 620 rule and, by pooling these two types of loans in our analysis, we obscured the jump in the securitization rate in that market.
The latest round in the debate
We went back to the drawing board to investigate these claims. We detail our findings in a new paper, available here. In the paper, we demonstrate that the pattern of jumps in default—without jumps in securitization—is not simply an artifact of pooling, but rather exists for many subsamples that do not pool GSE and private-label securitized loans. For example, we find the pattern among jumbo loans (by law an exclusively private-label market), among loans bought by the GSEs, and among loans originated in the period 2008–9 after the private-label market shut down. Furthermore, as figure 7 shows, the private-label market boomed in 2004 and disappeared around 2008, while the size of the jump in the number of loans at 620 continued to grow through 2010, demonstrating that use of the threshold was not tied to the private market.
What's more, KMSV's response fails to address the fundamental problem we identified with their research design: following the mandate of the GSEs, lenders independently use a 620 FICO rule of thumb in screening borrowers. Even if some subset of securitizers had used 620 as a securitization cutoff, one would not be able to tell what part of the jump in defaults is caused by an increase in securitization, and what part is simply due to the lender rule of thumb. Consequently, the jump in defaults at 620 cannot tell us whether securitization led to a moral hazard problem in screening.
To put this in more technical jargon, KMSV use the 620 cutoff as an instrument for securitization to investigate the effect of securitization on lender screening. But the guidance from the GSEs that caused lenders to adopt the 620 rule of thumb in the first place means that the exclusion restriction for the instrument is not satisfied—the 620 cutoff affects lender screening through a channel other than any change in securitization.
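In standard instrumental-variables notation (our notation, not KMSV's), the design and its weak point can be sketched as:

\[
\text{Default}_i = \alpha + \beta\,\text{Sec}_i + \varepsilon_i,
\qquad
Z_i = \mathbf{1}\{\text{FICO}_i \ge 620\} \ \text{as an instrument for}\ \text{Sec}_i .
\]

For \(\beta\) to be identified, the exclusion restriction \(\operatorname{Cov}(Z_i, \varepsilon_i) = 0\) must hold: the cutoff may affect default only through securitization. If lenders screen applicants differently around 620 for reasons of their own, such as the GSE rule of thumb, then \(Z_i\) enters \(\varepsilon_i\) directly and the restriction fails.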
We also found that the GSE and private-label markets were not truly separate. In addition to qualitative sources describing them as actively competing for subprime loans, we find that 18 percent of the loans in our sample were at one time owned by a GSE and at another time owned by a private-label securitizer—a lower bound on the fraction of loans at risk of being sold to both. Because the markets were not separate, the data must be pooled.
Our findings, of course, do not settle the question of whether securitization caused the crisis. Rather, they show that the cutoff rule evidence does not resolve the question in the affirmative but instead points a bit in the opposite direction. Credit score cutoffs demonstrate that large securitizers like Fannie Mae and Freddie Mac were able to successfully impose their desired underwriting standards on banks. We hope our work causes researchers and policymakers to reevaluate their views on mortgage securitization and leads eventually to a conclusive answer.
By Ryan Bubb, assistant professor at the New York University School of Law, and Alex Kaufman, economist with the Board of Governors of the Federal Reserve System
April 18, 2011
What effect does negative equity have on mobility?
A debate has broken out in the housing literature over the effect of negative equity on geographic mobility. The key question is whether homeowners with negative equity—those who are "under water"—are more or less likely to move relative to homeowners with positive equity. In a paper published in the Journal of Urban Economics last year (available on the New York Fed website), Fernando Ferreira, Joseph Gyourko, and Joseph Tracy (hereafter FGT) argue that underwater owners are far less mobile. Using data from 1985 to 2005, they find that negative equity reduces the two-year mobility rate of the average American household by approximately 50 percent. This is a very large effect and, if true, FGT's findings have important policy implications for both the housing market and the labor market today. For example, the economist and Nobel laureate Joseph Stiglitz, in testimony to the Joint Economic Committee of Congress on December 10, 2009, stated:
But the weak housing market will contribute to high unemployment and lower productivity in another way: a distinguishing feature of America's labor market is its high mobility. But if individuals' mortgages are underwater or if home equity is significantly eroded, they will be unable to reinvest in a new home.
The fear is that if people with negative equity can't move to new jobs, then the job-matching efficiency of the U.S. labor market will suffer, putting upward pressure on the unemployment rate. This type of "house lock" is exactly what the economy doesn't need as it emerges from the recent housing crisis and recession.
However, recent research by Sam Schulhofer-Wohl, an economist from the Minneapolis Fed, casts doubt on FGT's conclusions, as well as the economic intuition in Stiglitz's testimony. Schulhofer-Wohl replicated the FGT analysis using the same data set (the American Housing Survey, or AHS) over the same sample period. But he found the exact opposite result: negative equity significantly increases geographic mobility.
What is the source of the discrepancy?
The difference in results stems from what at first blush seems like a small discrepancy in how the two papers identify household moves in the AHS. Here are the details: the AHS is conducted every two years by the U.S. Census Bureau as a panel survey of homes. That means that AHS interviewers go to the same homes every two years to record who lives there (among other pieces of information). For a home that is owner-occupied in one survey year, there are four possibilities regarding its status two years later. First, the home could still be owner-occupied by the same household as before. Second, the home could be owner-occupied by a different household. Third, it could be occupied by a different household that rents the home but doesn't own it. Finally, the home could be vacant.
In their paper, FGT treated the first category as a non-move and the second category as a move. FGT threw out of their analysis any observations that fell into the third and fourth categories.1 Dropping these last two categories, rather than coding them as moves, introduces significant bias into FGT's results. As Schulhofer-Wohl notes, it effectively assumes that households in negative equity positions are no more likely to rent out their homes, or leave them vacant when they move, than are households with positive equity. But it is relatively straightforward to show that this assumption is not borne out in the data. Specifically, Schulhofer-Wohl finds that positive-equity households who move sell their houses to new owner-occupiers two-thirds of the time. The other two possibilities (renting out the home or leaving it vacant) combine to occur only one-third of the time. In contrast, among negative-equity households who move, sales to new owner-occupant households occur half of the time, with the other two possibilities occurring the other half. Thus, by dropping the last two categories of transitions, FGT are artificially increasing the mobility rate of positive equity households relative to negative equity households.
Schulhofer-Wohl recodes the moving variable so that instances in which an owner-occupied home is rented or vacated also count as moves. He then re-estimates FGT's regressions. The coding change reverses the estimated relationship between negative equity and mobility. The new estimates show that negative equity raises the probability of moving by 10 to 18 percent, relative to the overall probability of moving in the AHS data. This of course is in marked contrast to FGT's results, where negative equity was found to significantly decrease the probability of moving.
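The coding dispute boils down to how the four two-year transitions of an owner-occupied home are mapped to "move" or "stay." A toy version of the two codings (our own sketch, not the actual AHS extract) makes the difference concrete:

```python
# Sketch of the recode at issue. Each record is a home's status two years
# after being observed as owner-occupied. FGT drop rented/vacant outcomes;
# Schulhofer-Wohl counts any departure of the original owner as a move.

transitions = ["same_owner", "new_owner", "rented", "vacant"]

def fgt_move(status):
    """FGT: new owner = move, same owner = stay, rented/vacant dropped."""
    return {"same_owner": 0, "new_owner": 1}.get(status)  # None -> dropped

def sw_move(status):
    """Schulhofer-Wohl: anything but the same owner counts as a move."""
    return 0 if status == "same_owner" else 1

for t in transitions:
    print(f"{t:>10}: FGT={fgt_move(t)}  SW={sw_move(t)}")
```

Because underwater movers end up in the dropped "rented" and "vacant" categories more often than positive-equity movers, the FGT coding mechanically understates their mobility, which is the bias the recode corrects.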
What does theory tell us?
When thinking about what economic theory might say about the relationship between negative equity and mobility, it is important to distinguish how equity might affect selling versus how equity might affect moving. FGT write that their results suggest a role for what behavioral economists call "loss aversion." In this context, loss aversion can occur when owners are reluctant to turn paper losses into real ones by selling a home that has fallen in price. But, as Schulhofer-Wohl's analysis makes clear, it is possible and even common for households to move to different houses without selling their old ones. That means that loss aversion potentially affects the probability of selling a home without affecting the probability of moving.
Of course, while moving and selling are theoretically distinct, they often occur together in practice. One reason for the tight relationship between moving and selling involves liquidity constraints. Even short-distance moves entail nontrivial transaction costs, so households that do not have liquid wealth may not be able to move without selling their home. As a result, to the extent that negative equity decreases the probability of selling (via loss aversion), it may also decrease the probability of moving.
Besides loss aversion, there are at least two other channels through which liquidity constraints are relevant for the way that negative equity affects homeowner mobility. By definition, underwater households cannot retire their mortgages by selling their houses. Liquidity-constrained households that are also under water do not have the cash to make up the difference between the outstanding mortgage balance and sale price. As a result, negative equity could reduce selling (and, by extension, moving). On the other hand, liquidity-constrained households are more likely to simply default on their mortgages. Thus, negative equity might increase the probability of moving, though the moves that it facilitates are accompanied by foreclosures and not sales. Note that this "default channel" between negative equity and mobility depends importantly on expectations of future housing prices. Negative-equity households who do not think housing prices will rise any time soon are more likely to default on their mortgages, and thus move, than households who think that higher prices and restored housing equity are just around the corner.
The offsetting implications of liquidity constraints on mobility mean that theory doesn't provide a clean prediction for how negative equity should affect mobility. The question boils down to which implication is dominant in the data. The findings from the Schulhofer-Wohl paper suggest that the default channel may be relatively large, so concern about negative equity impeding homeowner mobility may be overblown.
Are these studies relevant to the current environment?
The sample period for both papers we have discussed ended in 2005. While we certainly believe that the issue addressed by both papers is very important, and that the Schulhofer-Wohl analysis corrects an important omission in the FGT study, we would offer a cautionary note to those who would extrapolate the findings of these studies to the current environment. The period 1985–2005 was a boom time in housing markets for most areas of the country. One way to see this is by noting the low number of negative equity observations in both the FGT and Schulhofer-Wohl papers. The majority of negative equity observations in the AHS data are likely from only a couple of areas in the country and from a narrow time period (most likely from the East and West coasts in the late 1980s and early 1990s). These places and time periods may be unrepresentative of the average negative-equity owner today.
Even more importantly, there were very few foreclosures from 1985 to 2005 relative to the past several years. This paucity of foreclosures was probably due not only to the low number of negative-equity households, but also to the low probability of foreclosure conditional on having negative equity. Recall that if housing prices are generally rising, households with negative equity will try hard to hang on to their homes and reap the benefits of future price appreciation, even if they are liquidity-constrained. It's probably safe to say that price expectations are lower today than they were in 1985–2005. Because low price expectations increase defaults, and because defaults and foreclosures increase the mobility of negative-equity owners through the default channel, it might be the case that the current effect of negative equity on mobility is not only positive, but also even larger than the positive estimates in Schulhofer-Wohl's paper.
Research economist and assistant policy adviser at the Federal Reserve Bank of Atlanta
1 This coding choice is not divulged in the FGT paper. The authors confirmed in private correspondence that it was a conscious decision to omit these categories and not a coding error, and that they are currently working on a revision of their original work that will address this issue.
March 09, 2011
The seductive but flawed logic of principal reduction
The idea that a program to reduce principal balances on mortgage loans will cure the nation’s housing ills at little or no cost has been kicking around since the very early stages of the foreclosure crisis and refuses to die. If news stories are true, the administration, in conjunction with the state attorneys general, will soon announce that lenders have agreed to write down borrower principal balances by a grand total of $20–$25 billion as part of a deal to address serious procedural problems in foreclosure filings. Policy wonks and housing experts will greet this announcement with glee, saying that policymakers have ignored principal reduction for too long but have seen the light and are finally going to cure the epidemic of foreclosures that has gripped the country since 2007. Are the wonks right? In short: we think not.
Why do so many wonks love principal reduction? Because they think principal reduction prevents foreclosures at no cost to anyone—not taxpayers, not banks, not shareholders, not borrowers. It is the quintessential win-win or even win-win-win solution. The logic of principal reduction is that in a foreclosure, a lender recovers at most the value of the house in question and typically far less. This is because of the protracted foreclosure process during which the house deteriorates and the lender collects no interest but has to pay lawyers and other staff to navigate 50 different byzantine state bureaucracies to get a clean title to the house, which it then has to sell in an extremely weak market. In contrast, reducing the principal balance to equal the value of the house guarantees the lender at least the value of the house because the borrower now has positive equity and research shows that borrowers with positive equity don’t default. To put numbers on this story, suppose the borrower owes $150,000 on a $100,000 house. If the lender forecloses, let's assume it collects, after paying the lawyers and the damage on the house, etc., $50,000. However, if it writes principal down to $95,000, it will collect $95,000 because the borrower now has positive equity and won't default on the mortgage. Lenders could reduce principal and increase profits!
The problem with the principal reduction argument is that it hinges on a crucial assumption: that all borrowers with negative equity will default on their mortgages. To understand why this assumption is crucial to the argument, suppose there are two borrowers who owe $150,000 but one prefers not to default (perhaps because she has a particularly strong preference for her current home, or because she does not want to destroy her credit, or because she thinks there's a chance that house prices will recover) and eventually repays the whole amount while the other defaults. If the lender writes down both loans, it will collect $190,000 ($95,000 from each borrower). If the lender does nothing, it will eventually foreclose on one and collect $50,000, but it will recover the full $150,000 from the other borrower, thus collecting $200,000 overall. Hence, in this simple example, the lender will obtain more money by choosing to forgo principal reduction.
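The two-borrower example can be checked in a few lines, using only the dollar figures from the post:

```python
# The post's two-borrower example in numbers. Both owe $150k on $100k houses;
# one would repay in full, the other would default (foreclosure nets $50k).

balance, writedown, foreclosure_recovery = 150_000, 95_000, 50_000

# Blanket write-down: both borrowers now have positive equity and repay $95k.
blanket = 2 * writedown                      # 190,000

# No write-down: one repays in full, the other is foreclosed on.
no_action = balance + foreclosure_recovery   # 200,000

print(f"write down both: ${blanket:,}   do nothing: ${no_action:,}")
```

The lender nets $10,000 more by doing nothing, which is the whole force of the objection: the "free" principal reduction pays $55,000 to a borrower who would have repaid anyway.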
The obvious response is that the optimal policy should be to offer principal reduction to one borrower and not the other. However, this logic presumes that the lender can perfectly identify the borrower who will pay and the borrower who won't. Given that there is a $55,000 principal reduction at stake here, the borrower who intends to repay has a strong incentive to make him- or herself look like the borrower who won't!
This is an oft-encountered problem in the arena of public policy. Planners often have a preventative remedy that they have to implement before they know who will actually need the assistance. This inability to identify the individuals in need always raises the cost of the remedy, sometimes dramatically so. A nice illustration of this problem can be seen in the National Highway Traffic Safety Administration's (NHTSA) proposed regulation to require all cars to have backup cameras to prevent drivers from running over people when they drive in reverse. Hi-tech electronics mean that such cameras cost comparatively little: $159 to $203 for cars without a pre-existing navigation screen, and $53 to $88 for cars with a screen, according to the NHTSA. $200 seems like an awfully small price to pay to prevent gruesome accidents that are often fatal and typically involve small children and senior citizens. But the NHTSA says that the cameras are actually extremely expensive, and arguably prohibitively so. What gives? How can $200 be considered a lot of money in this context? The problem is that backup fatalities are extremely rare, something on the order of 300 per year, so the vast majority of backup cameras never prevent a fatality. To assess the true cost, one has to take into account the fact that for every camera that prevents a fatality, hundreds of thousands will not. Accounting for this, the NHTSA estimates the cost of the cameras at between $11.3 million and $72.2 million per life saved.
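A back-of-the-envelope version of the NHTSA logic shows how a $200 gadget becomes a multimillion-dollar remedy. The camera cost and the roughly 300 fatalities per year come from the post; the figure of about 16 million vehicles sold per year is our own assumption for illustration.

```python
# Back-of-the-envelope cost per life saved: everyone pays for the remedy,
# but very few ever need it. Fleet size below is an assumed round number.

camera_cost = 200             # dollars per camera (upper end of the range)
vehicles_per_year = 16_000_000  # assumed annual new-vehicle sales
lives_saved_per_year = 300    # generously assume every fatality is prevented

cost_per_life = camera_cost * vehicles_per_year / lives_saved_per_year
print(f"~${cost_per_life / 1e6:.1f} million per life saved")
```

Even under the generous assumption that cameras prevent every backup fatality, this crude calculation lands near $11 million per life saved, close to the low end of the NHTSA's range, and the same arithmetic is what drives up the cost of blanket principal reduction.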
The idea of principal reduction starts with a correct premise: borrowers with positive equity—that is, houses worth more than the unpaid principal balance on their mortgages—rarely ever lose their homes to foreclosure. In the event of an unexpected problem (like an unemployment spell) that makes the mortgage unaffordable, borrowers with positive equity can profitably sell their house rather than default. The reason that foreclosures are rare in normal times is that house prices usually increase over time (inflation alone keeps them growing even if they are flat in real terms) so almost everyone has positive equity. What happened in 2006 is that house prices collapsed and millions of homeowners found themselves with negative equity. Many who got sick or lost their jobs were thus unable to sell profitably.
With this idea in mind, it then follows that if we could somehow get everyone back into positive-equity territory, we could end the foreclosure crisis. To do that, we either need to inflate house prices, which is difficult to do and probably a bad idea anyway, or reduce the principal mortgage balances for negative-equity borrowers. So we have a cure for the foreclosure crisis: if we can get lenders to write down principal to give all Americans positive equity in their homes, the housing crisis would be over. Of course, the question becomes: who will pay? Estimates suggest that borrowers with negative equity owe almost a trillion dollars more than their homes are worth, and a trillion dollars, even now, is real money. The principal reduction idea might stop here—an effective but unaffordable plan—but people then realized that counting all of the balance reduction as a cost was wrong. In fact, not only was the cost far less than a trillion dollars, but, as we noted above, many principal reduction proponents argue that it might not cost anything at all.
The logic that principal reduction can prevent foreclosures at no cost is compelling and seductive, and proposals to encourage principal reduction were common early in the foreclosure crisis. In a March 2008 speech, one of our bosses, Eric Rosengren, noted that "shared appreciation" arrangements had been offered as a way to reduce foreclosures; these arrangements had the lender reduce principal in return for a portion of future price gains realized on the house. In July 2008, Congress passed the Housing and Economic Recovery Act of 2008, which created Hope for Homeowners, a program that offered government support for new loans to borrowers if the lender was willing to write down principal.
While we were initially supportive of principal-reduction plans, we began to have doubts over the course of 2008. Our reasons were twofold. First, we could find no evidence that any lender was actually reducing principal. Commentators blamed the lack of reductions on legal issues related to mortgage securitization, but we became skeptical of this argument: the incidence of principal reduction was so low that securitization could not have been the only obstacle, or even a major one. (Subsequent research has shown this view to be largely right: the effect of securitization on renegotiation was between nil and small in this crisis, and lenders did not reduce principal much even during the Depression, when securitization did not exist.) The second issue, of course, was our realization of the logical flaw described above.
Negative equity and foreclosure
But aren't we being pessimistic here? Aren't we ignoring research that shows that negative equity is the best predictor of foreclosure? No, we aren't. On the contrary, we have authored some of that research and have long argued for the central importance of negative equity in forecasting foreclosures. But what the research shows is not that all or most people with negative equity will lose their homes. Rather, it shows that while people with negative equity are much more likely to lose their homes, most of them eventually pay off their mortgages. The relationship of negative equity to foreclosure is akin to that of cholesterol to heart attacks: high cholesterol dramatically increases the odds of a heart attack, but the vast majority of people with high cholesterol do not have heart attacks any time in the near, or even not-so-near, future.
To be sure, there are some mortgages out there with very high foreclosure likelihood: loans made to borrowers with problematic credit and no equity to begin with, located in places where prices have fallen 60 percent or more. However, such loans are quite rare now—most of those defaulted soon after prices started to fall in 2007—and make up a small fraction of the pool of troubled loans currently at risk. To add to the problem, the principal reductions required to give such borrowers positive equity are so large that the $20–25 billion figure discussed for the new program would prevent at most tens of thousands of foreclosures and make only a small dent in the national problem.
Millions of borrowers with negative equity will default, but there are many millions more who will continue to make payments on their mortgages, behavior that is not, contrary to popular belief, a violation of economic theory. Economic theory only says that borrowers with positive equity won’t default (read it carefully). It is logically false to infer from this prediction that all borrowers with negative equity will default. "A implies B" does not mean that "not A" implies "not B," as any high school math student can explain. And in fact, standard models show that the optimal default threshold occurs at a price level below and often significantly below the unpaid principal balance on the mortgage.
The problem of asymmetric information
Ultimately the reason principal reduction doesn't work is what economists call asymmetric information: only the borrowers have all the information about whether they really can or want to repay their mortgages, information that lenders don’t have access to. If lenders weren't faced with this asymmetric information problem—if they really knew exactly who was going to default and who wasn't—all foreclosures could be profitably prevented using principal reduction. In that sense, foreclosure is always inefficient—with perfect information, we could make everyone better off. But that sort of inefficiency is exactly what theory predicts with asymmetric information.
And, in all this discussion, we have ignored the fact that borrowers can often control the variables that lenders use to try to narrow down the pool of borrowers likely to default. For example, most of the current mortgage modification programs (like the Home Affordable Modification Program, or HAMP) require borrowers to have missed a certain number of mortgage payments (usually two) in order to qualify. This is a reasonable requirement, since we would like to focus assistance on troubled borrowers who genuinely need help. But it is quite easy to purposefully miss a couple of mortgage payments, and it might be a very desirable thing to do if it means qualifying for a generous concession from the lender, such as a reduction in the principal balance of the mortgage.
Economists are usually ridiculed for spinning theories based on unrealistic assumptions about the world, but in this case, it is the economists (us) who are trying to be realistic. The argument for principal reduction depends on superhuman levels of foresight among lenders as well as honest behavior by the borrowers who are not in need of assistance. Thus far, the minimal success of broad-based modification programs like HAMP should make us think twice about the validity of these assumptions. There are likely good reasons for the lack of principal reduction efforts on the part of lenders thus far in this crisis that are related to the above discussion, so the claim that such efforts constitute a win-win solution should, at the very least, be met with a healthy dose of skepticism by policymakers.
Senior economist and policy adviser at the Boston Fed
Research economist and assistant policy adviser at the Federal Reserve Bank of Atlanta
Research economist and policy adviser at the Boston Fed
December 21, 2010
Revisiting real estate revisionism: Concessionary mortgage modifications during the Depression
During the current foreclosure crisis, lenders have seemed far more willing to foreclose on delinquent borrowers than to offer them loan modifications. Some commentators have argued that this was not always the case. They claim that loan modifications are infrequent today because so many loans have been securitized, and thus are not owned by any one person or firm. They also say that the modern securitization process reduces loan modifications because it separates the entity that makes the modification decision—that is, the mortgage servicer—from the entities that gain the most if a foreclosure is avoided—that is, the mortgage investors. As we pointed out in our last post, Yale economist John Geanakoplos and Boston University law professor Susan Koniak argued in a March 2008 New York Times op-ed that the uncomplicated relationship between banks and borrowers in the good old days allowed the banks to work out modifications when their borrowers ran into trouble.
The Congressional Oversight Panel, created by Congress in October 2008 to "review the current state of financial markets and the regulatory system," expressed a similar belief in a March 2009 report on the state of the U.S. housing market:
For decades, lenders in this circumstance [that is, with troubled borrowers] could negotiate with can-pay borrowers to maximize the value of the loan for the lender (100 percent of the market value) and for the homeowner (a sustainable mortgage that lets the family stay in the home). Because the lender held the mortgage and bore all the loss if the family couldn't pay, it had every incentive to work something out if a repayment was possible.
Even in the good old days, lenders reluctant to restructure
Such claims, however, have usually been made with little or no reference to supporting research. Fortunately, a recent paper by Andra Ghent of Baruch College exploits a new data set to shed considerable light on this topic. Her findings argue against the idea that lender reluctance to modify is a recent phenomenon.
Ghent uses a data set from the National Bureau of Economic Research (NBER) that covers mortgages from 1920 to 1939, a period that encompasses the massive housing turmoil of the Great Depression. The data set consists of "mortgage experience cards," which the NBER collected in the 1940s from mortgage lenders in the New York metropolitan area. On the cards are the answers to short questionnaires about the characteristics of individual mortgage loans (see page 5 of Ghent's paper for an example). The cards also contain explicit information about any loan modifications, including the date of the modification and whether it was principal reduction, interest-rate reduction, change to the amortization schedule, or something else. The cards include loans from three types of mortgage lenders: life insurance companies, savings and loans, and commercial banks.1
Ghent finds few modifications in these cards, and these few were not particularly generous. Using a fairly conservative definition of what constitutes a concessionary modification, Ghent finds that approximately 5 percent of loans originated between 1920 and 1939 were modified, while 14 percent were terminated by foreclosure or a deed-in-lieu of foreclosure (the latter occurs when the owner surrenders the house to the lender without going through the foreclosure process). Of the loans that received a concessionary modification, about 40 percent received an interest rate reduction, which Ghent defines as an interest rate cut of at least 25 basis points (relative to origination) resulting in a new rate that is at least two standard deviations below the average interest rate on newly originated loans. The average rate reduction was only 78 basis points below the prevailing interest rate of new originations, suggesting that interest rate cuts were not particularly generous.
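Ghent's interest-rate criterion can be written down explicitly. The thresholds (25 basis points, two standard deviations) come from the paper; the sample rates below are invented for illustration:

```python
def is_concessionary_rate_cut(orig_rate, new_rate, market_mean, market_std):
    """Ghent's test: a cut of at least 25 basis points relative to
    origination, with the new rate at least two standard deviations
    below the mean rate on newly originated loans."""
    cut_big_enough = (orig_rate - new_rate) >= 0.0025
    rate_low_enough = new_rate <= market_mean - 2 * market_std
    return cut_big_enough and rate_low_enough

# Illustrative check: a 6.0 percent loan cut to 5.0 percent when new
# originations average 5.8 percent with a 0.3-point standard deviation.
print(is_concessionary_rate_cut(0.060, 0.050, 0.058, 0.003))  # True
```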
Another 40 percent of the modified loans received reductions in their amortization schedules, which would have likely decreased the required mortgage payments. However, Ghent points out that most of these extended amortizations occurred before 1930. In the period 1930–32, when house prices fell and unemployment rose the most, this type of modification was rare.
Principal balance reductions—and increases
Ghent also finds that less than 2 percent of all loans received principal balance increases. She argues that such increases may correspond to instances of forbearance. Forbearance occurs when a lender reduces the required mortgage payment for a short period; at the end of the period, the lender adds the arrears back to the loan balance. We have a minor quibble on this point: today, forbearance is not considered a permanent concessionary modification, because the lender does not ultimately write down any debt.
What about principal reductions? Perhaps the most surprising finding is that the data set shows no instances of principal reduction in the New York City metropolitan area and only a handful of instances in a broader sample that includes the entire states of Connecticut, New Jersey, and New York over a similar period. To us, this low number of principal reductions is compelling evidence that even Depression-era lenders were averse to renegotiating with troubled borrowers, just as lenders are today.
Balloon mortgages sank some borrowers
Another interesting finding concerns the refinancing decisions of lenders. Short-term balloon mortgages were more common in the 1920s and 1930s than they are today, and various scholars have linked the high foreclosure rate of the Depression to the unwillingness of lenders to refinance these mortgages when they came due. In fact, lender reluctance to refinance maturing mortgages is often used to explain the existence of the Home Owners' Loan Corporation (HOLC), a government organization set up in the early 1930s to refinance troubled mortgages. Ghent revisits this hypothesis with her data, measuring the frequency at which short-term balloon mortgages ended in foreclosure. She finds that balloon mortgages that were about to expire did indeed experience increased rates of foreclosure (see Ghent's table 4). However, this relationship only exists during the years when HOLC was purchasing a great many loans (1933–35). In other years, balloon mortgages were no more likely to end in foreclosure than other loans.
To us, this finding suggests a "HOLC effect." While HOLC was actively buying loans, private lenders may have refused to roll them over so that the borrowers would qualify for a HOLC refinance. If they did, then the lenders would be paid close to par for the loans by the government (see our previous post about the generosity of the HOLC program). In particular, the lenders received what were effectively government bonds in return for their mortgage. While these bonds carried lower interest rates, they carried vastly less credit risk as well.
To explain her findings, Ghent points to information problems between borrowers and lenders. In particular, lenders may not have known which borrowers were likely to truly need modifications, nor did they know with certainty which borrowers were likely to re-default if a modification were offered. Note that these information problems must have been quite severe. The national unemployment rate hit 10.8 percent in November 1930 and stayed in double digits for more than a decade. In this environment, a borrower asking for a modification was quite likely to really need one. The fact that lenders made few modifications suggests some strong intrinsic hurdles to renegotiation when information between borrowers and lenders is less than perfect.
Old problems, new analysis
The crucial policy question is what the Depression-era reluctance of lenders to renegotiate teaches us about today's foreclosure crisis. Ghent surmises that the information problems are less of an issue in the current environment, but we disagree. Even with better data and screening technology, today's lenders face significant information problems when deciding on modifications. Moreover, Ghent's paper is also informative on the role of securitization in reducing modifications. Even when individual lenders owned entire loans, modifications were rare.
All told, Ghent's paper is full of solid analysis on a topical subject. And while she doesn't quite go this far, we believe that her findings not only confirm the importance of information problems but may also bury the notion that securitization is the primary obstacle to renegotiation in the current foreclosure crisis.
Research economist and assistant policy adviser at the Federal Reserve Bank of Atlanta
Chris Foote, senior economist and policy adviser at the Federal Reserve Bank of Boston
1 Ghent argues that the data set probably provides a representative sample of loans held by life insurance companies and commercial banks in the 1920s and 1930s, but is less likely to be representative of loans held by savings and loans due to a survivorship bias. Unlike life insurance companies and commercial banks, savings and loans were not able to reliably report data on their inactive loans at the time of the survey.
November 15, 2010
Mortgage relief in the Great Depression
Bemoaning the unwillingness of lenders to renegotiate loans in the current mortgage crisis, critics often point to the "old days" when, they argue, foreclosures were a rarity because of a different institutional setup. John Geanakoplos and Susan Koniak took this view in an op-ed they wrote for The New York Times.
In the old days, a mortgage loan involved only two parties, a borrower and a bank. If the borrower ran into difficulty, it was in the bank's interest to ease the homeowner's burden and adjust the terms of the loan. When housing prices fell drastically, bankers renegotiated, helping to stabilize the market.
Luigi Zingales uses almost the same language to make the same argument in an article in The Economist's Voice.
In the old days, when the mortgage was granted by your local bank, there was a simple solution to this tremendous inefficiency. The bank forgave part of your mortgage….
But what evidence do we have to back up these claims? The authors do not provide any direct evidence, do not provide a source for the evidence, and are not clear about what exactly they mean by "the old days." Until recently, there was little hard evidence on the subject. However, the simple existence of foreclosure crises in the past—in New England in the 1990s, for example, and across the nation during the Depression—is, at least on the surface, evidence that large numbers of mortgages escaped the seemingly win-win solution of modification even in the past.
In the last two years, two researchers, Andra Ghent of Baruch College and Jonathan D. Rose, an economist with the Federal Reserve System's Board of Governors, have gone to the records from the Depression, one possible definition of "the old days," to see whether lenders did indeed renegotiate on a wide scale. The Depression gives us a good opportunity to test the theory that our current set of institutions is the problem, because the institutional setup was different then—and if there was ever a profitable opportunity to modify loans, it was the period from 1932 to 1939. The extent of the crisis at the time minimized the information problems that, we argue, prevent profitable modifications now. Even if some borrowers who received modifications then could have afforded to repay their loans, or more than their modified loans, there were so many deeply delinquent borrowers that the gains on the rest should have made up for it.
Comparing the 1930s Home Owners Loan Corporation to today's programs
In this post, we focus on the paper by Rose, who explores the Home Owners Loan Corporation (HOLC), a federal program aimed at transitioning troubled borrowers into new loans. (We will focus on Ghent's paper in the next post.) Many of today's critics have held up HOLC as an example of enlightened government policy; it was the model of the Hope for Homeowners (H4H) program enacted by Congress in 2008. Rose argues in his paper that, contrary to popular belief, HOLC actually did not offer particularly good terms to borrowers and instead focused mostly on assisting banks.
The last time that national housing prices crashed as severely as they have over the past three to four years was during the Depression. As with today's crash, the 1930s fall coincided with a rash of mortgage defaults and foreclosures. According to Wheelock (2008), by 1933, 13.3 of every 1,000 mortgages in the United States were in foreclosure, and by the beginning of 1934, almost half of all outstanding urban home mortgages were delinquent. To stem the rising tide of foreclosures, the Roosevelt Administration created HOLC in 1933. Over the following three years, HOLC purchased and refinanced more than 1 million delinquent home loans. Although 1 million loans may not seem like that many, keep in mind that the U.S. mortgage market was significantly smaller 80 years ago and that current government programs have permanently modified or refinanced far fewer than 1 million loans during this crisis.
HOLC was a voluntary program that aimed to prevent foreclosures by refinancing troubled borrowers into mortgages that were more affordable. The program accepted applications from borrowers from June 1933 to November 1934, and then again from May to June 1935. Rather than paying cash for the mortgages, HOLC exchanged its own bonds for the lender's claim on the underlying house. The tax-exempt bonds were essentially equivalent to U.S. Treasury securities and thus could be considered very low-risk assets, especially relative to mortgage debt at the time. After HOLC purchased a loan from the lender, it would issue a new 15-year, fully amortizing mortgage at an interest rate of between 4.5 and 5 percent. The HOLC loans contained no prepayment penalties and had an interest-only option for the first three years. Thus, in most cases, a HOLC refinance gave the borrower a more affordable mortgage by lowering the interest rate and stretching out payments. Like modification programs today, HOLC tried to help borrowers in financial distress, discouraging applications from borrowers who just wanted a lower interest rate or for whom a refinance from a private lender was a viable alternative.
Did HOLC inflate home appraisals to encourage lenders to modify loans?
Until Rose did the research for his paper, a lack of data—other than a handful of aggregate statistics—prevented us from knowing much about HOLC activities. However, Rose was able to obtain loan-level data for a sample of HOLC loans from New York, New Jersey, and Connecticut.1 His goal was to analyze how HOLC encouraged lenders to part with their loans—after all, although HOLC bonds were much less risky than the mortgage debt that lenders held in their portfolios at the time, the bonds also carried lower interest rates. Thus, some 1930s lenders could have decided to take their chances with their old mortgages and refuse to participate in the government's program. One key factor affecting the lender's decision was the amount of mortgage debt that HOLC was willing to refinance, which, because of a combination of law and HOLC policy, was only 80 percent of the value of the property as estimated by a HOLC appraisal. If the amount of the new HOLC mortgage was lower than the old mortgage balance, then a participating lender would receive a "haircut" on the loan and the borrower would receive a principal reduction in addition to a lower interest rate and longer maturity schedule.2
Rose's main finding is that HOLC seems to have recognized that placing a low value on the house would make it more likely that the lender would have to offer a principal reduction, so that a low appraisal would reduce the chance that the lender would participate in the program. As a result, Rose argues, HOLC tended to place high values on properties in its appraisal process. This practice was good for lenders, who, in many cases, were paid in full for their mortgages. But high appraisals were bad for borrowers, because they made principal reductions less likely.3
A strength of the Rose paper is its careful explanation of how HOLC appraisals came to be relatively high. The HOLC appraisal formula consisted of three components. The first was the estimated present market value of the property, as in today's appraisals. The second was the estimated cost of purchasing the lot and constructing a similar structure. The third was the capitalized rental value: the average monthly rent the property would have commanded over the previous ten years, accumulated over a ten-year period with no discounting. HOLC averaged these three measures to determine the final appraised value.
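As a rough sketch, the three-part formula and the 80 percent lending cap work as follows; the property figures are invented, and only the formula's structure comes from Rose's description:

```python
# Illustrative HOLC appraisal (all dollar figures are hypothetical).
market_value = 6_000       # component 1: estimated present market value
replacement_cost = 9_000   # component 2: lot purchase plus construction cost
monthly_rent = 75          # avg monthly rental value over the prior ten years
capitalized_rent = monthly_rent * 12 * 10   # ten years of rent, undiscounted

appraisal = (market_value + replacement_cost + capitalized_rent) / 3
max_holc_loan = 0.80 * appraisal   # HOLC would refinance at most 80 percent

# With depressed market values, the 80 percent cap can exceed the market
# value itself, so a lender could be paid in full with no write-down.
print(appraisal, max_holc_loan)   # 8000.0 6400.0
```

Because the second and third components were typically above the depressed market value, the averaged appraisal, and even 80 percent of it, could sit above what the house would actually fetch.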
Because of the dramatic decline in housing values at the beginning of the Depression, the second and third measures were typically higher than the first one, the market-based measure, which resulted in appraisals being higher than market values on average. According to Rose's data, which consists of loan applications that HOLC accepted and mortgages that they refinanced, the appraisals exceeded the market-value estimates almost 74 percent of the time and equaled the market-value estimates approximately 8 percent of the time. The value that came out of this process was not necessarily the actual value the organization used, however. HOLC performed two additional reviews (at the district and then the state level) on each application to guard against any obvious errors. These two reviews were highly subjective. HOLC's policy was that these reviews could lower the final appraisal without bound but could raise the appraisal only by 10 percent. According to Rose's analysis, the final appraisal exceeded the market value estimate in 58.5 percent and equaled it in 10.6 percent of the cases, showing that the review process was proactive in adjusting the values that came out of the three-component appraisal formula. Even more compelling is the fact that almost one-third of the HOLC refinances had amounts that exceeded 80 percent of the estimated market value of the property, while HOLC regulations meant that none had amounts that exceeded 80 percent of the final appraisal.
Inflated appraisals helped keep Depression-era banks solvent, at the expense of homeowners
We view these findings as convincing evidence that HOLC inflated appraisals in order to increase lender participation, rather than pressing lenders to take write-downs that would have directly reduced borrowers' principal. Rose takes this reasoning a step further and concludes that the inflated appraisals were motivated by the desire to keep banks and other lending institutions solvent, at the expense of mortgage borrowers. While he cannot offer a straightforward way to confirm this interpretation, Rose does offer some tantalizing contemporaneous quotations to support it. For example, he includes this quote from one of the HOLC loan examiners:
There seems to be a deliberate effort made by the Connecticut officials to make high appraisals with the purpose of holding up real estate values. We have had this suspicion confirmed in a recent interview with the State Counsel, Mr. Tierney. This gentleman, during a call in our office last month, stated that they believed it necessary, to prevent depreciation of realty value as much as possible so as to maintain the soundness of the banks and other financial institutions which had made mortgage loans during the past 5 years, to make high appraisals. His opinion was that many of these financial institutions would be today in an unsound condition if their mortgage loans were appraised on a basis of today's realty values. This statement is illuminating when appraisals by our Connecticut offices are being analyzed. (p. 19)
One potential problem with Rose's interpretation is that it assumes HOLC didn't negotiate to the fullest possible extent with lenders. That may be true, but it's also possible that lenders were unwilling to substantially write down loans, which would have forced HOLC to maximize lender participation by paying high prices.
What does the HOLC experience teach us about the current foreclosure situation?
Lenders today still seem reluctant to modify large numbers of troubled loans. In the Depression, HOLC solved the problem of lender reluctance with high appraisals, essentially transferring a large amount of mortgage credit risk from the private sector to the public sector. By contrast, in today's Home Affordable Modification Program (HAMP), government payments encourage a modification only when it is determined to be a win-win proposition for both the borrower and the lender. The small number of modifications to date may suggest that the number of win-win modifications is low. In other words, just as in the Depression, today's lenders may be willing to take their chances with existing mortgages rather than offer generous concessionary modifications to borrowers.
We find the HOLC policy of refusing to directly reduce mortgage principal to be potentially informative to the current modification debate in another way. Principal reductions appear to have been as rare in the 1930s as they are today (more on this in our post about the Ghent paper). Many have blamed securitization by private institutions for this pattern today, but if securitization were the real culprit, how do we explain a similar lack of principal reduction in a period when securitization was basically nonexistent?
3 Of course, even borrowers who did not receive principal reductions were helped because they could swap their short-term balloon mortgages with longer-term HOLC loans. A longer amortization period tends to lower monthly payments, which make homeownership more affordable. Borrowers who could not roll over balloon mortgages when they came due no doubt found HOLC mortgages particularly helpful in preventing foreclosure.
October 20, 2010
Securitized mortgage loan or not, lenders are not restructuring
In a new paper, Agarwal, Amromin, Ben-David, Chomsisengphet, and Evanoff (2010) finally put to rest the widespread belief that securitization massively exacerbated the foreclosure crisis by preventing lenders from renegotiating loans. The authors show that the data do not support the argument articulated by Paul Krugman and Robin Wells in the New York Review of Books:
In a housing market that is now depressed throughout the economy, mortgage holders and troubled borrowers would both be better off if they were able to renegotiate their loans and avoid foreclosure. But when mortgages have been sliced and diced into pools and then sold off internationally so that no investor holds more than a fraction of any one mortgage, such negotiations are impossible.
This post is the first in a three-part series in which we discuss recent studies, including Agarwal et al. (2010), that provide evidence against what we call the "institutional view": the claim that the low modification rate results from an excess of securitized loans. These studies show, rather, that the low rate comes from lenders having imperfect information. This alternative—the "information view"—holds that lenders cannot easily determine whether a delinquent borrower will default even if the lender makes concessions.
While Agarwal et al. (2010) find that lenders fail to renegotiate 93 percent of seriously delinquent securitized mortgages, they also find that the figure drops only to 90 percent for portfolio loans, which are free of the supposed problems generated by securitization. Whether that 3-percentage-point difference really reflects securitization frictions is disputable, as we discuss below. But since most renegotiated mortgages fail anyway, eliminating securitization frictions would at most have reduced the number of foreclosures by less than 2 percent. The authors clearly show that Krugman and Wells and others who argue that securitization frictions were generating millions of unnecessary foreclosures are way, way off base. Securitization may or may not inhibit renegotiation, but most troubled borrowers cannot blame it for their situation, since their lender probably would not have helped them even if the lender owned the loan free and clear.
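The bound implied by these figures can be checked directly; the re-default share below is our illustrative assumption, not a number from the paper:

```python
# Shares of seriously delinquent loans that get renegotiated.
securitized = 1 - 0.93   # 7 percent of securitized loans
portfolio = 1 - 0.90     # 10 percent of portfolio loans

# If securitization frictions vanished, securitized loans would at best
# be renegotiated at the portfolio rate: a 3-percentage-point gain.
extra_modifications = portfolio - securitized

# Most modified loans re-default anyway; assume (illustratively) that
# half of the extra modifications actually avert a foreclosure.
redefault_share = 0.5
foreclosures_averted = extra_modifications * (1 - redefault_share)
assert foreclosures_averted < 0.02   # under 2 percent of delinquent loans
```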
The trouble with imperfect information
We mention above the two schools of thought about why lenders are reluctant to renegotiate. Proponents of the institutional view argue that securitization creates perverse incentives for mortgage servicers, the agents who collect monthly mortgage payments from borrowers and who are responsible for renegotiating troubled loans. In short, the institutionalists argue that servicers gain little from successful loan modifications, even though the ultimate owners of the mortgages (that is, the investors in the mortgage-backed securities, or MBS) gain a lot. They claim that so few modifications take place because the incentives of mortgage servicers and investors were not properly aligned when the MBS was created.
The information view, on the other hand, holds that lenders face a difficult decision whenever they are confronted with a delinquent borrower, because they cannot easily predict which of three groups that borrower belongs to. One group of delinquent borrowers will "cure" on their own, becoming current on their loans without a modification. Another group will wind up defaulting even if they are given a modification. A third group will default without a modification but will remain current if their loans are modified. In other words, only modifications given to borrowers in the third group are profitable for lenders.
Unfortunately, lenders don't have the perfect information needed to place each borrower in the appropriate group, so any modification they offer risks being wasted on a borrower who would have cured anyway or who will redefault regardless. A profit-maximizing strategy may therefore make lenders stingy with modifications in general. Low modification rates mean that many borrowers in the third group will lose homes that could have been saved with a modification, but they also mean that the lender avoids losses from awarding modifications to borrowers in the first two groups.
Note that those who hold the information view argue that securitization is not an important issue because both MBS investors and owners of whole mortgages face the same information problems when deciding whether a modification is worthwhile.
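The lender's dilemma described above can be sketched as a simple expected-cost comparison. All probabilities and dollar amounts below are hypothetical, chosen only to illustrate why stingy modification can be profit-maximizing even when some homes could be saved:

```python
# Illustrative expected-cost comparison for a lender facing one delinquent
# borrower. The group shares and losses are made-up numbers, not estimates
# from the paper.

P_SELF_CURE = 0.30   # group 1: cures without any help
P_REDEFAULT = 0.50   # group 2: defaults even if modified
P_SAVEABLE = 0.20    # group 3: saved only by a modification

LOSS_FORECLOSURE = 100_000  # lender's loss when a loan ends in foreclosure
COST_CONCESSION = 20_000    # cost of a concession (rate cut, principal cut)

# Expected loss if the lender does nothing: groups 2 and 3 foreclose.
loss_no_mod = (P_REDEFAULT + P_SAVEABLE) * LOSS_FORECLOSURE

# Expected loss if the lender modifies everyone: the concession is paid in
# all three cases (wasted on group 1), and group 2 forecloses anyway.
loss_mod = COST_CONCESSION + P_REDEFAULT * LOSS_FORECLOSURE

print(f"do nothing: expected loss ${loss_no_mod:,.0f}")
print(f"modify all: expected loss ${loss_mod:,.0f}")
```

With these particular numbers the two strategies cost the lender exactly the same ($70,000), so even a slightly more expensive concession, a larger self-cure share, or a higher redefault rate tips the lender toward doing nothing, despite group 3 losing homes that a modification would have saved.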
Compelling evidence for the information view
Agarwal et al. (2010) do not explicitly aim to distinguish between the institutional and information views, but they do provide what we believe to be compelling evidence in favor of the information view. The researchers used a comprehensive database of troubled mortgages, known as the Mortgage Metrics database, to assess loss mitigation efforts by mortgage servicers in all of 2008 and the first five months of 2009. The Mortgage Metrics database contains detailed information on exactly how servicers handled delinquent loans for a wide range of institutions. (Other data sets force researchers to infer whether a modification was made from auxiliary information such as the interest rate or remaining maturity of the loan.) For example, the authors were able to measure how likely servicers were to offer borrowers repayment plans, in which borrowers make up arrears over time, as compared with offering concessionary modifications, such as interest-rate cuts or principal reductions. They were also able to determine whether lenders initiated and completed foreclosures or allowed borrowers such exit strategies as deeds-in-lieu-of-foreclosure, and whether lenders did nothing at all, waiting to see if troubled borrowers eventually cure on their own.
Most importantly, the database contains an extensive list of attributes of the troubled loans, which permitted the authors to look at the relationship between the likelihood of modification and such loan-level attributes as the borrower's credit history, and whether the loan was held in an MBS or in a lender's individual portfolio.
As we noted above, lenders sometimes offer delinquent borrowers repayment plans, giving them the chance to repay the loan under the original terms of the mortgage. Significantly, a repayment plan requires borrowers to pay back any arrears, usually with interest. Lenders may also offer troubled borrowers forbearance, which means the borrowers make lower payments for some time and then make up the arrears at the end of the forbearance period. These two types of mortgage help are temporary measures meant to carry the troubled borrower through a difficult period. By contrast, loan modifications are specific, permanent changes to the terms of a mortgage after origination.
Among other things, the Agarwal et al. (2010) paper invalidates the argument, proposed by Piskorski, Seru, and Vig (2010) and Mayer (2010: 18), that a focus on modifications is too narrow because other methods, like repayment plans and forbearance, were important forms of loan renegotiation. Table 2 in the paper shows quite definitively that loan modifications accounted for the vast majority of the resolutions of troubled loans that did not involve foreclosure proceedings during the crisis period.
"One message is quite clear: Lenders rarely renegotiate"
The paper has three additional major findings, the first being that loan modifications are indeed rare. According to Table 1.A, fewer than 10 percent of borrowers received a loan modification in the first six months after becoming 60-days delinquent (missing two mortgage payments). In other words, 90 percent of borrowers who became delinquent received no substantive assistance from the lender. This finding mirrors that of Adelino, Gerardi, and Willen (2010), who calculated the frequency of modification using a different data set over a slightly different time period. The Adelino, Gerardi, and Willen (2010) paper also reports a modification frequency under 10 percent, and concludes, "No matter which definition of renegotiation we use, one message is quite clear: lenders rarely renegotiate."
Another finding of the Agarwal et al. (2010) paper sheds some light on the debate between institutional and information explanations for the infrequency of modifications. Although securitization seems to have had some effect on the likelihood of modification in their data, the effects are economically small and difficult to interpret. Table 3 shows that loans securitized either by the government-sponsored enterprises (GSEs), such as Fannie Mae and Freddie Mac, or by private institutions, which often handled subprime or jumbo loans, were between 3 and 6 percentage points less likely to receive a modification than were whole loans held in the portfolios of banks.
In some sense, this finding is evidence for the institutional school, since loans in MBSs were less likely to receive modifications. But while the 3- to 6-percentage-point difference is large relative to the overall modification rate, it is still small relative to the total number of troubled loans. Essentially, servicers do nothing to help 90 percent of delinquent private-label borrowers, compared to 87 percent of delinquent portfolio borrowers. Even if we assume that the entire 3-percentage-point difference between portfolio and private-label loans is a treatment effect related to problematic incentives in private securitization contracts (pooling and servicing agreements), it still covers just 3 percent of delinquent mortgages. Moreover, given the extremely high redefault rates that have characterized modifications during this period, this difference translates into a reduction in foreclosure frequency of less than 2 percent. In other words, under this (extreme) assumption, if we solved all of the issues with private securitization contracts, we could prevent at most 2 percent of the foreclosures.
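The arithmetic behind that "less than 2 percent" figure can be made explicit. The 3-percentage-point portfolio/private-label gap comes from the paper; the 50 percent redefault rate below is an illustrative assumption standing in for the "extremely high" redefault rates the paper describes:

```python
# Back-of-envelope translation of the modification gap into prevented
# foreclosures. mod_gap is from the paper; redefault_rate is assumed
# for illustration only.

mod_gap = 0.03         # extra modifications if securitization frictions vanish
redefault_rate = 0.50  # assumed share of modified loans that default anyway

# Only modifications that do not redefault actually prevent a foreclosure.
foreclosures_prevented = mod_gap * (1 - redefault_rate)
print(f"share of delinquent loans saved: {foreclosures_prevented:.1%}")
```

Under this assumption only 1.5 percent of delinquent loans avoid foreclosure, and any higher redefault rate pushes the figure lower still, which is why the 3-percentage-point gap caps the effect of securitization frictions at well under 2 percent.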
No evidence for causal link between securitization and modification
Even this measure of the effect of poor institutional incentives may be too big. There are at least two good reasons to doubt a truly causal relationship between private securitization contracts and the frequency of renegotiation. The first reason is that, as the Agarwal et al. (2010) paper finds, loans securitized by the GSEs were actually much less likely to receive a modification than even the privately securitized loans. The conventional wisdom on the link between securitization and renegotiation (see Piskorski, Seru, and Vig 2010) pointed the finger at specific details in private securitization contracts that failed to align the incentives of servicers and investors. But this story applies only to privately securitized loans, not to agency loans. None of the institutional "facts" that the Piskorski, Seru, and Vig (2010) paper proposes apply to the GSEs, since the GSEs retain all of the credit risk when they securitize a loan. When a GSE loan becomes delinquent, it effectively turns into a portfolio loan. The GSEs have full discretion to modify any loan at any time for any reason and stand to enjoy all of the benefits. Agarwal et al. (2010) point out that the "precarious financial position of the GSEs in 2008 prior to their conservatorship may have made it difficult for them to engage in modifications and the attendant loss recognition," but this argument applies to only half of the period under study. After conservatorship started in September of 2008, capital was no longer a concern for the GSEs.
The second reason to doubt a causal link between securitization and modification is that the financial crisis triggered by the failure of Lehman Brothers and the ensuing heavy intervention by the federal government make it problematic to view behavior after September 2008 as "market-based approaches to stem mounting mortgage losses" (Agarwal et al. 2010, 1). By October 2008, the Troubled Asset Relief Program (TARP) had become law, and the government effectively owned stakes in many of the major commercial and investment banks, which also happened to be the largest mortgage servicers. In fact, TARP explicitly conditioned the provision of assistance to banks on their willingness to assist borrowers. The political considerations in dealing with troubled mortgages are well illustrated by JP Morgan's February 2009 announcement of a foreclosure moratorium in a letter to Congressman Barney Frank, the head of the congressional committee tasked with overhauling regulation of the banking industry. Thus, by the end of 2008, political considerations played a central role in any calculation of the relative merits of renegotiating or foreclosing on a loan. For this reason, Adelino, Gerardi, and Willen (2010) focus on the period prior to September 2008. They find that, while the overall likelihood of modification is roughly the same, the difference in modification activity attributable to private-label mortgages is much smaller: only 1 percentage point.
Finally, Agarwal et al. (2010) show that information asymmetries matter, which is the third main finding of the paper. A key impediment to renegotiation is the self-cure risk, or the possibility that a delinquent borrower will resume repayment and eventually cover the balance of the loan in full. Any concession the lender made to such a borrower would thus be wasted. The authors show evidence of precisely this mechanism at work, finding
...much lower rates of modification for troubled borrowers with higher FICO scores and lower LTV ratios, which is the group with ex ante greatest likelihood of self-curing their delinquency.
The growing literature disputing the institutional argument
In showing that information, not institutions, is at the heart of the renegotiation issue, the authors build on an increasingly large body of evidence, which includes the Adelino, Gerardi, and Willen (2010) paper mentioned above. They are further supported by Ghent (2010) and Rose (2010), who both debunk the myth that the absence of securitization facilitated widespread renegotiation during the Depression (more on this topic in upcoming posts). In fact, Wechsler (1984) shows that many of today's anti-deficiency laws, which limit the ability of lenders to pursue borrowers for the difference between the loan balance and the amount recovered in foreclosure, originated as a policy response to the particularly harsh treatment of defaulting mortgagors during the Depression. Moreover, Hunt (2010) exhaustively studied securitization contracts and found little support for the claim that private securitization explicitly distorts the incentives of servicers of securitized loans as compared to portfolio loans, writing that:
Certain general standards are extremely common [in private securitization contracts]: Servicers typically must...act in the interests of investors, and service loans in the same manner as they service their own loans.
Finally, we note the work of Mayer, Morrison, Piskorski, and Gupta (2010), which perfectly illustrates the difficulty in identifying borrowers who are truly in financial distress and thus suitable for a loan modification. The authors find that the announcement of a generous loan modification program caused borrowers to default on their mortgages.
It is our hope that the Agarwal et al. (2010) paper will put an end to three years of misguided public policy. The appeal of renegotiations was that they appeared to allow policymakers to prevent foreclosures at little cost to investors, lenders, or taxpayers and without unfairly helping anyone. The reality is that preventing foreclosures costs money, and it's time we had a debate about how or whether we want to spend money rather than trying to convince ourselves that we can prevent millions of foreclosures by tweaking the incentives of financial intermediaries.