Joseph Breeden’s Post

I would like to humbly offer some model risk management advice to the Federal Reserve in hopes some economists over there might be listening...

Whether as a DFAST requirement or simple best practice, lenders of all types create stress test models of their loan portfolios. Over the last decade we have found consumer debt service payments (CDSP) to be particularly valuable. It's not in the DFAST list of economic factors, but neither are used car prices (essential for auto loans), state-level data, etc. So stress test model developers routinely dig a bit deeper into the data at FRED to make their best models. In short, that means that many bank models are dependent on any changes the Fed or other agencies make to their data definitions. I assume that they know this, but I have begun to wonder.

Imagine our surprise recently when we compared month-to-month updates of CDSP (see first graph). The previous definition of CDSP is dramatically different from the new definition post-pandemic. So we searched FRED and the Federal Reserve websites for an explanation. Well, you won't find anything under CDSP. Instead, you need to search for Consumer Debt Service Ratio. There you will find a proud explanation of this great new definition of Consumer DSR, which is renamed on FRED to CDSP. https://lnkd.in/g--AUtd2

Here's where the model risk management comes in. These are not the same variable. If anyone out there puts the new CDSP into a model built with the old CDSP, it will fail. Some of these are regulatory compliance models required by the Fed. There is no shortage of irony in that.

Furthermore, the original CDSP would test as stationary, and therefore suitable in a model without transformation. The new version, starting only from 2006, will definitely not test as stationary. It cannot be used in models unless you first transform it by looking at changes. In our tests, that does not have the same predictive power.

Unfortunately, it gets worse. When you download the new CDSP from FRED, you do not get the graph in the Fed release notes; you get the time series in my second image. That is a concatenation of the old definition and the new definition. These are two different variables glued together. Under no circumstances should this time series be used in any modeling.

I understand that the economists are proud of this advancement in measuring debt service ratios. They may be right, but it should be given a new name and be treated as a new variable. If I saw a bank do what the Fed and FRED just did, I would absolutely fail their models.

For my data science friends out there, I am sorry for your models that must now be rebuilt. We are too.

  • [Images: line charts comparing the old and new CDSP definitions (first graph) and the concatenated series as downloaded from FRED (second image)]
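For readers who want to check the stationarity claim themselves, here is a minimal sketch, assuming the FRED series ID CDSP, pandas_datareader for the download, and statsmodels for an Augmented Dickey-Fuller test; the tooling choices are assumptions rather than anything described in the post, and the series FRED returns today is the concatenated version warned about above.

```python
# Minimal sketch (assumed tooling: pandas_datareader + statsmodels): pull the
# current "CDSP" series from FRED and run an Augmented Dickey-Fuller test on
# levels and on period-over-period changes. Note that what FRED serves today is
# the concatenated series the post says should not be modeled directly.
import pandas_datareader.data as web
from statsmodels.tsa.stattools import adfuller

cdsp = web.DataReader("CDSP", "fred", start="1980-01-01").dropna()

def adf_report(series, label):
    """Print the ADF statistic and p-value; a low p-value suggests stationarity."""
    stat, pvalue, *_ = adfuller(series)
    print(f"{label}: ADF stat = {stat:.2f}, p-value = {pvalue:.3f}")

adf_report(cdsp["CDSP"], "CDSP levels")
adf_report(cdsp["CDSP"].diff().dropna(), "CDSP changes")
```

Whether the differenced series retains enough predictive power is a separate empirical question; as the post notes, in the author's tests it did not.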
Rita L. Carroll (黄立瑋)

Fellow, University of Chicago Leadership & Society Initiative | Bank, Board, & Fintech Advisor | Former Chief Operating Officer | Chief Risk Officer | Non-Profit Board of Directors

1mo

Interesting … I would anticipate many consumer models would use more detailed bureau trade line data to compute DTI … as there has always been much debate on student loan debt burden during periods of non-payment, or even retail private label debt during intro promo periods - thoughts? Would FRED data be too crude to use?

Hey Joseph Breeden, thanks for the insight. Can we be sure that the Fed does not realise the impact of altering the CDSP definition? Many of the economists over at the Fed are super smart, in my experience. Your concatenation point is spot on.

Lawrence Mielnicki, Ph.D.

Economist and Risk Modeling Professional

1mo

Of course the model developers don't take changes in data definition into consideration. But consider this: macroeconomic modeling typically uses data that is regularly adjusted after being initially released. These days, that includes GDP, imports, and employment. Modelers will pop in the latest release and make policy decisions. Do they update their databases to account for any revisions and re-estimate their coefficients? The GDP numbers take three years to be finalized. The BEA sometimes publishes the variance in their estimates, and it's always surprising to see how wide they can be from the earliest estimate. But yeah, all sorts of fiscal policy is made on those first estimates that every economist knows are wrong.
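To make the revision point concrete, here is a hedged sketch comparing GDP as first published with GDP after all revisions, using ALFRED vintage data; the fredapi package and the placeholder API key are assumptions for illustration, not part of the comment.

```python
# Hedged sketch (assumed tooling: fredapi plus a FRED API key): compare GDP as
# originally released with GDP after all revisions, via ALFRED vintages.
from fredapi import Fred

fred = Fred(api_key="YOUR_FRED_API_KEY")  # placeholder; supply your own key

first = fred.get_series_first_release("GDP")    # values as first published
latest = fred.get_series_latest_release("GDP")  # values after all revisions

revisions = (latest - first).dropna()
print(revisions.describe())  # how far the first estimates drift once finalized
```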

Daniel Uhlemann

Strategy-Analytics-Investments | "Not everything that can be counted counts, and not everything that counts can be counted." by William Bruce Cameron

1mo

Lots of issues are highlighted:
- This is why there is a lot of redundant data in the system and in data centres, and consequently filtering through training and inference.
- The conundrum is that official agencies are no longer unbiased, so they will create data from data with different weights and lags and variables and cohorts, truncating, winsorizing, etc.
- Now the problem is, the market will react to those aggregated single points whether they represent the truth, a better version of the truth, or a lie.
- That's why it's perhaps better to bypass those "official" portals (where possible and feasible) and go straight to the raw data collector. Preferably a 3rd party company that specialises in just aggregating those (no point in duplicating data and efforts).

Marc Intrater

Risk Analytics Consultant

2w

Good story and a great example of many aspects of model risk management done well.

* Indicators as the start of an investigation: I presume that this was discovered because a normal review of the power of each variable used indicated that CDSP was no longer as powerful as in the past. Many MRM governance processes might have caught this, but most would have ended there. Very good to see a team keep digging to discover the root cause.
* The difficulty and importance of good data management: As you say, the economists who created the new indicator had good reason to do so, and very likely the new one is better for many purposes. However, when maintaining a data warehouse, especially one as widely used as FRED, it is vital to consider all downstream users. In particular, many users may value stability of definition over *any* potential improvement. At a very minimum, notifications of changes need to be PUSHED to users, not merely noted in a footnote.
* Need for third-party risk assessment: Even as trusted an entity as the Fed (and they are truly deserving of that reputation) is a third-party provider whose product needs to be regularly monitored and assessed to ensure that it remains fit for purpose.
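On the monitoring theme in the first and third points, a rough sketch of one possible fit-for-purpose check follows: a population stability index between the driver series archived at model build time and the same history re-downloaded today. The array names and the 0.25 rule of thumb are illustrative conventions, not a description of any particular MRM process.

```python
# Rough sketch of a drift check: a population stability index (PSI) between the
# driver series archived at model development and the same date range as
# re-downloaded today. A large PSI would flag a redefinition like the CDSP one.
import numpy as np

def psi(expected, actual, bins=10):
    """PSI between two samples; values above ~0.25 are often read as a material shift."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))[1:-1]  # interior cut points
    e = np.bincount(np.searchsorted(edges, expected), minlength=bins) / len(expected)
    a = np.bincount(np.searchsorted(edges, actual), minlength=bins) / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

# build_vintage and current_vintage are hypothetical arrays holding the same
# historical window of CDSP, one saved at build time and one downloaded now:
# print(psi(build_vintage, current_vintage))
```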

Martin Bohley

Listening Learning Living

1mo

Yes, which is why I avoid using any ratios in the FRED datasets and instead use the underlying factors that make up the ratio if it is important to the model. If, as seems to be the case here, the ratio involves factors with fundamental assumptions that are not available, then the ratio cannot be used even if it works very well as a factor in the model. You have to look elsewhere for fundamental data with less (never zero) risk of changing. It does make life more difficult and interesting. Of course, I agree 100% with the recommendation to hold data definitions constant and start a new dataset while maintaining the old dataset, possibly for years. Thank you.
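As one way to follow that approach, here is a hedged sketch that rebuilds a debt-to-income style ratio from two series that FRED does publish separately; REVOLSL (revolving consumer credit) and DSPI (disposable personal income) are stand-ins chosen purely for illustration, not the actual components behind CDSP.

```python
# Hedged sketch of rebuilding a ratio from underlying series rather than taking
# a pre-computed ratio off FRED. REVOLSL (revolving consumer credit) and DSPI
# (disposable personal income) are illustrative stand-ins, both in billions of
# dollars, not the inputs the Fed uses to construct CDSP.
import pandas_datareader.data as web

num = web.DataReader("REVOLSL", "fred", start="1990-01-01")
den = web.DataReader("DSPI", "fred", start="1990-01-01")

ratio = (num["REVOLSL"] / den["DSPI"]).dropna()  # a revolving-credit-to-income ratio
print(ratio.tail())
```

Because you control the numerator and denominator, a definition change in either one shows up explicitly in your own pipeline instead of arriving silently inside a pre-built ratio.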

Albert Galick

Founder at Systems Behavioral Research

1mo

Joseph Breeden, nice catch! Do I add a step about abutting differently engineered series to my incomplete 12-step program?! Though, in a dynamic economy, should we expect stationarity? Events put individuals into cohorts, which need to be identified. Complex systems are usually out-of-equilibrium (which my method requires!)

A 12-step program investigating time & individuality:
1) Track some individual entities.
2) Use medians, not averages.
3) Don't eliminate outliers or impute missing values.
4) Devise diffusion indices & do candlestick charting with pseudo-volume of 1/2 ["buyers"|"sellers"] crossing [above|below] a moving median. A low pseudo-volume breakout means it's past time to act.
6) Realize a synthetic control must be selected from data of the finest granularity possible.
7) Use my method to look for hidden states & transitions that might account for dependence on individual history, to identify & characterize cohorts!
8) Use my method to look for [markers|factors] that might [reveal|influence] less-reversible, perhaps hidden transitions, towards long-lived states, desirable or otherwise.
...

Resetting jumbo ARMs with 4/12 months DQ, June 2008. Best Markov model with added hidden [pay|no-pay|overpay] states and transitions:

  • [Attached image]
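Purely as a generic illustration of the moving-median diffusion idea in step 4, and not a reconstruction of Albert's own method, a sketch with simulated entities might look like this:

```python
# Generic illustration (not Albert's method): a crude diffusion index built from
# the net share of tracked entities crossing above vs. below their own trailing
# 12-period median each period. Entities here are simulated random walks.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
panel = pd.DataFrame(rng.normal(size=(60, 100))).cumsum()  # 60 periods x 100 entities

prior_median = panel.rolling(12).median().shift(1)  # trailing median, excluding the current period
pos = (panel > prior_median).astype(int) - (panel < prior_median).astype(int)  # +1 above, -1 below

crossed_up = ((pos == 1) & (pos.shift(1) <= 0)).sum(axis=1)     # entities newly above their median
crossed_down = ((pos == -1) & (pos.shift(1) >= 0)).sum(axis=1)  # entities newly below their median

diffusion = (crossed_up - crossed_down) / panel.shape[1]  # net share crossing each period
print(diffusion.tail())
```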

Brings to mind the quote attributed to Josiah Stamp: “The government are keen on amassing statistics. They collect them, add them, raise them to the nth power, take the cube root and prepare wonderful diagrams. But you must never forget that every one of these figures comes in the first instance from the village watchman, who just puts down what he damn pleases.” All data should be presumed to be of dubious quality unless there is convincing evidence to the contrary.

Sarim M. Khan

Independent Sponsor & Operating Partner

1mo

I lost faith in the Fed when, in 2021, they said inflation was "transitory"... and my gut instinct made me convince my commercial lender at the time to underwrite a fixed-rate note.

Great advice! It was gracious of you to assume they were attentive to how their data was being used. Someone with contacts needs to flash your message in the face of the Fed and FRED.


