
Tag Archives: CVaR

QuantumRisk is starting a new CMBS service – providing econometric loss forecasts for CMBS deals.

Why econometric CMBS loss forecasting? Our studies show that

1. The Triangular Matrix Method is incorrect (Excel 2007 example).

2. DSCRs are not good predictors of defaults (see brochure).

3. Stress testing is usually not done correctly (see post).

4. Black Swans can be used to differentiate between two deals with similar cashflows (see brochure).

Contact Ben Solomon (benjamin.t.solomon@QuantumRisk.com) for more details. Company brochure here.


Goldman’s Profits
Goldman Sachs 2Q 2009 profits of $3.44 billion made the news. Its trading activities were the primary earnings driver, with wider profit margins on the buying and selling of securities, while everyone else did not make as much. There are news reports in which Goldman Sachs denies substantial profits from trading, but in this post I analyze the available public data and necessarily infer that Goldman Sachs increased profits by changing their algorithms.

First, full disclosure: I have no connections with Goldman Sachs (GS), don’t know what specifically they do, how they do it or why they do it, and as of today I don’t know anyone at GS.

Let’s lay down some facts.


Experts’ Knowledge
A. Use the Artificial Intelligence definition of experts: an expert is that 5% of the population that has 95% of the knowledge of a field; the other 95% of this population are non-experts, as they have substantially less than 95% of this knowledge. Given that many people have passed through Goldman Sachs, we can infer some possibilities:

A1. This trading knowledge (TK) has not left Goldman Sachs. The experts are still at Goldman Sachs and the people who left are knowledgeable but not experts. Or,

A2. This knowledge has left GS: some of these TK experts are now employed at other companies.

Outcome A1 provides us with little room to infer Goldman Sachs’ TK, other than that they are very good at holding on to their experts. Outcome A2 then raises some interesting possibilities.


Knowledge Dispersion
B. Second fact. We know from recent news reports that only Goldman Sachs made a ton of profits; nobody else did, or at least not to the same degree. Therefore we can infer that the experts who left Goldman Sachs were unable to reproduce Goldman Sachs’ successes, because:

B1. They could not reproduce GS’s knowledge capabilities.

B2. They could reproduce GS’s knowledge capabilities but given the recession were constrained from executing these trading strategies.

Outcome B2 is of little value to us. It would also suggest that the other banks, having invested so much in technology, people & processes, did not have confidence in their own people. That would not make sense, so we are left with outcome B1. So what does Goldman Sachs have that nobody else seems to have?


Goldman Changed Their Algorithms
C. Third fact: from a statistical perspective there is only one way to make sustained profits, and it can be broken into 3 parts:

C1. To make a profit on a trade, say some distribution P(x,y)

C2. To make a loss on a trade, say some distribution L(a,b)

C3. To make sure you have more profits than losses in a series of trades, or E(P) > E(L)

I must admit here that I am assuming that Goldman Sachs’ TK profitability is sustainable, i.e. that Goldman Sachs has achieved E(P) > E(L).
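To make C1 to C3 concrete, here is a minimal simulation sketch. All the Gamma parameters are invented for illustration, not inferred from anything GS actually does; the point is only how E(P) > E(L) produces sustained profit over a long series of trades:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical per-trade distributions (illustrative only):
# profits P(x,y) and losses L(a,b) modeled as Gammas with different means.
profit_dist = lambda n: rng.gamma(shape=2.0, scale=0.5, size=n)  # E(P) = 1.0
loss_dist   = lambda n: rng.gamma(shape=2.0, scale=0.4, size=n)  # E(L) = 0.8

n_trades = 100_000
win = rng.random(n_trades) < 0.5  # half the trades are winners
pnl = np.where(win, profit_dist(n_trades), -loss_dist(n_trades))

# Because E(P) > E(L), cumulative P&L drifts upward over a long series.
print("mean P&L per trade:", pnl.mean())  # ~ +0.10
print("cumulative P&L:    ", pnl.sum())
```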

According to the reported data, Goldman Sachs increased their VaR (Value at Risk, an industry standard for measuring financial risk) from $240 million (1Q 2009) to $245 million (2Q 2009), or 2%. This is a marginal increase in risk recognition compared to the increase in profits from about $1.84 billion (1Q 2009) to $3.44 billion (2Q 2009), or 87%. But they increased their VaR by 33% from a year ago. The general consensus reported in the news is that Goldman Sachs substantially increased their trading risk.

Having worked real numbers with VaR and CVaR over many, many years I would put forward a different opinion. Given the Wall St. crash of 2008, Goldman Sachs substantially changed their VaR methodology to recognize the underestimation of their trading risk in prior years. This can be seen in their historical data. Between 3Q 2002 and 1Q 2008 VaR normalized for asset size ranged between 0.0122% (2Q 2005) and 0.0189% (2Q 2006). Between 2Q 2008 and 4Q 2008 VaR was increased from 0.0206% to 0.0253%. VaR was again increased to 0.0311% (1Q 2009) and 0.0299% (2Q 2009).
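The normalization used here is simply VaR divided by total assets, so quarters with very different balance-sheet sizes can be compared directly. Here is a sketch of that calculation and the kind of flag I am applying; the band comes from the figures quoted above, while the 1.5x threshold is my own illustrative rule of thumb, not a formal test:

```python
def normalized_var(var_usd: float, total_assets_usd: float) -> float:
    """VaR as a fraction of total assets, so quarters with different
    balance-sheet sizes can be compared directly."""
    return var_usd / total_assets_usd

# Pre-crash band quoted above: 0.0122% to 0.0189% of assets.
PRE_CRASH_LO, PRE_CRASH_HI = 0.000122, 0.000189

def looks_like_methodology_shift(var_pct: float) -> bool:
    """Flag a normalized VaR far outside the stable historical band; such a
    jump is more plausibly a measurement change than a sudden doubling of
    actual riskiness (illustrative 1.5x rule of thumb)."""
    return var_pct > 1.5 * PRE_CRASH_HI

print(looks_like_methodology_shift(0.000311))  # 1Q 2009 level -> True
```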

Compare the 2009 VaRs to the last time the Dow was in the 8,000 to 9,500 range, i.e. 3Q 2002 to 4Q 2003. In 2002/2003 Goldman Sachs’ VaR was 0.0132% (4Q 2002) to 0.0175% (3Q 2003). However, in 2009 Goldman Sachs’ VaR was between 0.0299% and 0.0311%, or double the 2003 figures. See Figure 1.

[Figure 1: Goldman Sachs historical VaR, normalized for asset size]

Therefore my experience working with VaR and CVaR suggests that Goldman Sachs changed their methodology in 3 stages (2Q 2008, 4Q 2008 & 1Q 2009) but did not alter the riskiness of their asset classes.


Industry Misconception: Probability is a Sufficient Criterion
D. So we can assume that Goldman Sachs’ TK has figured out how to ensure that E(P) > E(L). But if you buy into coherent measures of risk, R, and Black Swans, BS, as I do, you would also add some additional constraints to their trading strategies:

D1. First constraint, E(P) > E(L). It is not sufficient that the probability of profit P(P) be greater than the probability of loss P(L); P(P) > P(L) alone is an insufficient condition, because the shape of the return distribution’s tail can significantly alter outcomes. We should note here that, on an industry-wide basis, quants use P(P) > P(L) as a sufficient criterion.

D2. Second constraint, the sum of profits S(P) generated from past profitable trades must be significantly greater than the sum of losses S(L) generated from past losing trades, or S(P) > S(L). The distinction between E(P) > E(L) and P(P) > P(L) is subtle but significant.

D3. Third constraint, the 98% loss CVaR or R(L,98%) must not be substantially large, or R(L,98%) << 100%.

D4. Fourth constraint, Goldman Sachs does not trade when Black Swans are substantially large or BS(L) >> 0.

The reader may ask: what is the significance of D3 or D4? If a large extreme loss is realizable, then it takes only one such trade to eliminate past profits. D1 tells us that for a specific set of trades Goldman Sachs had figured out the statistical long-run outcomes. D2 tells us that Goldman Sachs is keeping track of their trading history within their algorithms. And D3 & D4 tell us that they are selective in what they trade.
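A toy example makes D1 concrete: a strategy can win on 95% of trades, so P(P) > P(L) by a wide margin, and still lose money because of the tail. The payoff numbers below are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(11)

# A short-volatility-style payoff: win small very often, lose big rarely.
n = 1_000_000
wins = rng.random(n) < 0.95
pnl = np.where(wins, 1.0, -30.0)  # P(P) = 0.95, P(L) = 0.05

print("P(P) > P(L):     ", wins.mean() > 1 - wins.mean())  # True
print("E[P&L] per trade:", pnl.mean())                     # ~ -0.55: a losing strategy

# The 98% loss CVaR (mean of the worst 2% of trades) exposes the tail that
# the win rate hides; this is what constraint D3 guards against.
tail = np.sort(pnl)[: int(0.02 * n)]
print("98% CVaR:        ", -tail.mean())                   # 30.0
```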

Many people have suggested that Goldman Sachs uses supercomputers to exploit latency differences, but I have been hearing this since the 1970s. So, assuming that Goldman Sachs is using supercomputers, I tend to discount latency as the real reason for them. I would think that Goldman Sachs uses supercomputers to evaluate D1, and some form of D3, on the fly in real time.


The Ability to Make Money is not the same as Having Money to Make Money
I received a lot of comments from a lot of people. These comments can be summarized into 4 points: Cheap Funds, Insider Information/Conspiracy Theory, Organization, and Market Efficiency. Goldman Sachs did fail and had to be rescued, but the question remains: why did they make a ton of profits that made headlines while others did not? Looking at each of the 4 suggestions, here are my opinions:

E1. Organization: Goldman’s need for rescue shows that they weren’t organizationally better than any of the other banks.

E2. Market Efficiency: Under the severe stress of 2008/2009, markets would not have been efficient, but that alone would not have excluded Goldman Sachs from losing money like the other banks did.

E3. Insider Information/Conspiracy Theory: First, that is a very big risk to take, especially if you get caught. However, would not the other big banks have had the same ‘advantage’ just by virtue of their size? In my opinion this is foolishness, and I am sure GS employees would agree with me. Second, this is an asymmetric problem: you hear of insiders getting caught because they made a good profit from their inside information, but not when they lost money. In general I believe that inside information is overrated.

E4. Cheap Funds: To use the army term, cheap funds are a force multiplier. You have to have the ability to make profits before you can amplify those gains. That is why VCs are picky, and even then they don’t always succeed, because they too don’t always get the ‘make’ part right.

Conclusion
My inference is that Goldman Sachs had trading knowledge that enabled them to make those trading profits. This must have been fairly recent (2008 & 2009) for that knowledge not to have dispersed into the rest of the industry, and the historical data tends to agree with this timing. This mini case study illustrates two very important points: that it is possible to reduce business risk if you get it right, and that there are still hidden misconceptions to be identified and resolved, even in an environment as sophisticated as quant-based trading.

___________________________________________________________________
Disclosure: I’m a capitalist too, and my musings & opinions on this blog are for informational/educational purposes and part of my efforts to learn from the mistakes of other people. Hope you do, too. These musings are not to be taken as financial advice, and are based on data that is assumed to be correct. Therefore, my opinions are subject to change without notice. This blog is not intended to either negate or advocate any persons, entity, product, services or political position.
___________________________________________________________________

The Opportunity:
What would you pay to know the Black Swan of your CMBS deal risk?

Reverse that question!

What would a deal manager pay you to know the Black Swan of his CMBS deal?

The Need:
In spite of the bad publicity surrounding CDOs and structured finance, the tranche structure is one of the most efficient methods of creating differentiated classes of assets, as The Committee on the Global Financial System explains:

A key goal of the tranching process is to create at least one class of securities whose rating is higher than the average rating of the underlying collateral pool or to create rated securities from a pool of unrated assets. This is accomplished through the use of credit support (enhancement), such as prioritisation of payments to the different tranches.

However, what really caused the market to substantially undervalue these assets was that the default rate and the severity of the loss were much greater than even the complex rating processes had estimated them to be. Have you noticed that some (many?) of the AAA tranches had losses?

The Solution:
The answer is an econometric assessment of the default and loss distributions of CMBS assets. Note: not a single point value, but the whole distribution. This distribution provides the CMBS deal’s 95% or 98% VaR and CVaR loss estimates.

From this distribution we can then recalculate the loss estimates for each tranche, and be pretty certain what the loss characteristics of these tranches are, independently of the ratings assigned to them. A backup second opinion, if you will.
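As a sketch of what that per-tranche recalculation looks like, here is a minimal Monte Carlo tranche waterfall. The pool-loss distribution, attachment points, and all parameters are illustrative assumptions, not calibrated to any real deal:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical pool-loss distribution (the whole distribution, not a point
# value), expressed as a fraction of the pool; Gamma-shaped, per the
# commercial-property loss discussion elsewhere on this blog.
pool_loss = np.clip(rng.gamma(shape=2.0, scale=0.02, size=100_000), 0.0, 1.0)

# Illustrative capital structure: (attachment, detachment) per tranche.
tranches = {"equity": (0.00, 0.05), "mezzanine": (0.05, 0.15), "senior": (0.15, 1.00)}

def tranche_loss(pool: np.ndarray, attach: float, detach: float) -> np.ndarray:
    """Fraction of a tranche wiped out, given pool-level loss fractions."""
    return np.clip(pool - attach, 0.0, detach - attach) / (detach - attach)

for name, (a, d) in tranches.items():
    tl = tranche_loss(pool_loss, a, d)
    var98 = np.quantile(tl, 0.98)
    cvar98 = tl[tl >= var98].mean()  # mean loss in the tail beyond 98% VaR
    print(f"{name:9s} 98% VaR={var98:6.1%}  98% CVaR={cvar98:6.1%}")
```

The loss characteristics per tranche then stand on their own, independent of the assigned ratings.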

Why would this provide better answers?
1. We know the assigned ratings did not cut it.
2. We know the vintage or triangular matrix method provided incorrect default and loss curves. I was the first & only person to correctly identify that a major tool used in the mortgage industry, the vintages/triangular matrix method, was providing incorrect results. The link provides access to an Excel worksheet that allows you to confirm this for yourself. To understand the magnitude of this finding, one need only look at the vast array of mortgage analyses, from the Esaki-Snyder reports to the Wachovia 2008 CMBS Loss Study, that use this tool.
3. The DSCR loss model used in the CMBS industry does not match the historical data. I discovered this weakness in the method through extensive testing against the historical data. Do you know of anyone who has personally tested these tools against the historical data?

What would the solution look like?

The graph below shows simulated long-term VaR & CVaR loss outcomes for a hypothetical CMBS deal.

[Figure: Simulated long-term VaR and CVaR loss outcomes for a CMBS deal]

This graph is based on a set of CMBS loss & default distributions, and provides a second method of evaluating bond pricing. After all, at the end of the day, aren’t cashflow analysis and DSCRs about bond pricing? Notice how VaR (green dash) consistently understates losses relative to CVaR (purple dash). We can add Black Swans to this report. Note that my initial assessment was that CMBS Black Swans are on the order of 20% to 80%.

Partnership:
I am seeking partnerships with banks/investment funds/ratings companies to fund the development of this econometric CMBS business, which I expect to be transferable to the RMBS sub-industry.

Call me 303-618-2800 or email me at benjamin . t . solomon AT QuantumRisk . com if you are interested in this business.

___________________________________________________________________

Giacomo John Roma pointed me to an interesting article by Taleb & Spitznagel as part of the LinkedIn discussion and here are my thoughts on this and the related discussion.

I infer that Taleb’s primary concern is leverage. Debt provides leverage, and unconstrained debt provides unconstrained leverage. That is the lesson of 2008. Taleb wants us to reduce the extent of the leverage in our economy, because debt has a property that affects our perception of risk, i.e. debt hides volatility.

Regarding VaR:

(1) If you examine the historical data, outside of the social sciences and quality control there are very few things that are Normally distributed. Taleb’s point here is that much of the mathematics used in finance and economics rests strongly on assumptions of normality, and its metrics are therefore poor forecasters. They only appear to work when the law of large numbers can be applied. Taleb is therefore arguing that a lot of this mathematics, and the analytical tools built on it, are erroneous in the long run.

(2) There are serious problems with VaR even without the normality assumption. In my experience (I never assume normality) over years of building econometric CMBS loss and default models, I found that VaR would consistently underestimate losses in the first 3 to 5 years of the loan.
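For readers who want to check this kind of claim against their own data, here is a minimal empirical VaR/CVaR calculation that makes no normality assumption; the Gamma sample is a synthetic stand-in for real loss data:

```python
import numpy as np

def var_cvar(losses: np.ndarray, level: float = 0.98) -> tuple[float, float]:
    """Empirical VaR and CVaR (expected shortfall) at the given level.
    Nothing is assumed Normal: both are read straight off the sample."""
    var = np.quantile(losses, level)
    cvar = losses[losses >= var].mean()
    return var, cvar

# Fat-tailed (Gamma) synthetic losses, the kind of shape real mortgage
# loss data actually shows.
rng = np.random.default_rng(1)
sample = rng.gamma(shape=1.5, scale=2.0, size=50_000)
print(var_cvar(sample))  # CVaR sits well beyond VaR when the tail is heavy
```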

Black Swans are a real problem in mortgages. The historical data shows that they are real. Just look at 2008. My own estimates are that Black Swans in CMBS bonds are on the order of 20% to 80%.

I also found that the only realistic way to handle undisciplined leverage was to implement non-linear economic or risk capital controls. This comes back to Taleb’s point on non-linearity.

Non-linearity is not new. Credit card companies use non-linearity a lot: they impose 2x and even 3x rates, or 30%, on delinquent card holders. Try imposing that on institutions, and all hell breaks loose. This is the power of buyers and sellers at play, not mathematics and not finance.
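For what a non-linear control might look like in code, here is a sketch of a convex capital charge; the 8% base, 10x threshold, and quadratic penalty are illustrative assumptions, not a calibrated rule:

```python
def capital_charge(leverage: float) -> float:
    """Convex (non-linear) capital requirement: roughly linear at low
    leverage, punitive beyond a threshold, so undisciplined leverage
    prices itself out. All numbers are illustrative assumptions."""
    base = 0.08 * leverage
    excess = max(leverage - 10.0, 0.0)
    return base + 0.08 * excess ** 2  # quadratic penalty past 10x leverage

print(capital_charge(5.0))   # 0.40 -- the linear region
print(capital_charge(20.0))  # 9.60 -- the penalty dominates
```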

___________________________________________________________________

This series of blogs is derived from my discussions on the LinkedIn Quant Finance: What is the best approach to handling CMBS &/or RMBS Credit Risk analysis? discussion forum.

I agree that prepayments matter in the RMBS space, but I was looking for more than the usual accepted, standard knowledge: industry wisdom versus textbook stuff, so to speak. Argyn had suggested other loss models and Russell had explained how migrating FICO scores affect RMBS portfolios. The discussion continues:

For example, if Argyn had not mentioned bond pricing models, that would not have jogged my memory about the negative correlation between RMBS and CMBS. This negative correlation is something anybody can test for themselves. I don’t have the data or the models, so this is from memory. I would suggest testing residential losses against commercial losses by MSA. There are some economic lag effects, but I never got to complete this study.

This negative correlation could be interpreted as consumer spending lagging job loss or growth. Therefore, if the recession is long enough, we can expect an increase in commercial property losses even as residential properties begin to recover. If the recession is short you won’t notice this lag, because most consumers and property owners have some staying power. If the recession is too long then both are negatively affected.
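For anyone who wants to run the test, here is a rough sketch of the lagged-correlation study I have in mind. The data-frame layout and column names are assumptions, and the synthetic series merely stands in for real MSA loss data:

```python
import numpy as np
import pandas as pd

def lagged_corr(df: pd.DataFrame, max_lag: int = 8) -> pd.Series:
    """Correlation of commercial losses with residential losses shifted
    back by `lag` quarters, pooled across MSAs (hypothetical columns)."""
    out = {}
    for lag in range(max_lag + 1):
        shifted = df.groupby("msa")["res_loss"].shift(lag)
        out[lag] = df["com_loss"].corr(shifted)
    return pd.Series(out, name="corr")

# Synthetic single-MSA stand-in so the sketch runs; in practice, load real
# quarterly loss rates by MSA, e.g. pd.read_csv("msa_loss_rates.csv").
rng = np.random.default_rng(2)
n = 60
res = rng.gamma(2.0, 0.005, n)  # residential loss rates
com = 0.03 - 0.8 * np.concatenate([np.full(4, res.mean()), res[:-4]]) \
      + rng.normal(0.0, 0.001, n)  # commercial losses lag res, inversely
df = pd.DataFrame({"msa": "DEN", "res_loss": res, "com_loss": com})
print(lagged_corr(df))  # the most negative correlation appears near lag 4
```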

OK let me step out of the ‘conventional space’ to make this discussion more interesting and hope that others would join in.

My thinking was: once you have bought a deal, it is yours, good, bad and ugly. So accounting for prepayments (the bad) in RMBS is a poor strategy. As a tactic you need to do it, but as a strategy it is poor. And you are always stuck with the economy, and whatever it does.

My strategy for both RMBS & CMBS was to determine the CVaR (the ugly) that an investor was willing to tolerate within some investment horizon, say 5 years from the underwritten parameters.

To do this you need to know how a deal would perform over this investment horizon, from the underwritten parameters. That means prepayments are (1) of little value, as they are driven by future changes in economic fundamentals, and (2) management tools for dealing with the bad.

The question I then had to answer was how do we minimize the ugly (CVaR) given only underwritten parameters?

Before I ran into the RMBS incomplete data problem and abandoned RMBS, I tested FICO scores. Could we use underwritten FICO scores to predict future losses in RMBS space? Once you have bought the deal you can watch credit scores deteriorate, but we want to avoid that as much as we can.

One of the things I did was to analyze the distribution of FICO scores in the ‘universe’ against the distribution in the defaulted assets. To my surprise there was no difference: underwritten FICO scores are unable to predict future deterioration of individual creditworthiness. That is one of the main reasons we abandoned RMBS, because with incomplete data everything hinges on FICO scores. Again, you can test this yourself.
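A minimal version of that universe-versus-defaulted comparison, using a two-sample Kolmogorov-Smirnov test; the FICO arrays here are synthetic, deliberately drawn from the same distribution to mirror the ‘no difference’ finding, and the real test would use your own underwriting and default data:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
# Synthetic stand-ins: underwritten FICO scores for the full universe and
# for the subset that later defaulted (same distribution by construction).
universe_fico  = rng.normal(700, 50, 200_000).clip(300, 850)
defaulted_fico = rng.normal(700, 50, 8_000).clip(300, 850)

result = ks_2samp(universe_fico, defaulted_fico)
# A large p-value means the two distributions are indistinguishable, i.e.
# underwritten FICO carries no signal about future default.
print(f"KS statistic={result.statistic:.4f}, p-value={result.pvalue:.3f}")
```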

I believe that, from a forecasting perspective, FICO scores are a proxy for either disposable or discretionary income, and that the two are highly correlated. I haven’t done this study; it is a guess from having looked at tons of data.

In CMBS one can build a model to forecast future losses at a deal or portfolio level, and then determine, from the CVaR tolerance and investment horizon, which portfolio to invest in.
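As a sketch, the selection rule might look like this; the three deal distributions and the 10% tolerance are invented for illustration:

```python
import numpy as np

def cvar(losses: np.ndarray, level: float = 0.98) -> float:
    """Empirical CVaR: mean loss in the tail beyond the level-quantile."""
    var = np.quantile(losses, level)
    return losses[losses >= var].mean()

# Hypothetical simulated 5-year loss distributions for candidate deals,
# produced by a loss-forecasting model as described above.
rng = np.random.default_rng(8)
deals = {
    "deal_A": rng.gamma(2.0, 0.015, 50_000),
    "deal_B": rng.gamma(1.2, 0.030, 50_000),
    "deal_C": rng.gamma(4.0, 0.008, 50_000),
}

tolerance = 0.10  # the investor will wear at most a 10% CVaR loss
eligible = {name: cvar(sims) for name, sims in deals.items()
            if cvar(sims) <= tolerance}
print(eligible)  # invest only in deals whose tail risk fits the mandate
```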

Anyone else seen these types of problems with FICOs?

Anybody tried the investment strategy I’ve outlined above?

 

Disclaimer: This blog is purely for informational/educational purposes and is not intended to either negate or advocate any product, service, political position or persons.
Creative Commons License
QuantumRisk Blog Posts by Benjamin T Solomon is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License.
Based on a work at quantumrisk.wordpress.com.

This series of blogs is derived from my discussions on the LinkedIn Quant Finance: What is the best approach to handling CMBS &/or RMBS Credit Risk analysis? discussion forum.

I’ve been building VaR and CVaR models for commercial property debt portfolios/deals for the last 5 years.

I use a 2-layered model: the first layer (input) is a forecasting model based on historical data that projects future losses; the second layer (output) is a Monte Carlo model.
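A stripped-down sketch of the 2-layer idea, with the layer-1 forecast stubbed as a point estimate plus a standard error (the numbers are illustrative, not my actual model):

```python
import numpy as np

rng = np.random.default_rng(9)

# Layer 1 (input): an econometric forecast of the loss rate. In practice
# this is a model fitted to historical data; here it is stubbed as a
# point forecast with a standard error from the fit residuals.
forecast_loss, forecast_se = 0.045, 0.012  # illustrative numbers

# Layer 2 (output): Monte Carlo around the layer-1 forecast to turn the
# point estimate into a full loss distribution (Gamma, matched to the
# forecast mean and standard error), from which VaR/CVaR follow.
sims = rng.gamma(shape=(forecast_loss / forecast_se) ** 2,
                 scale=forecast_se ** 2 / forecast_loss,
                 size=100_000)

var98 = np.quantile(sims, 0.98)
cvar98 = sims[sims >= var98].mean()
print(f"98% VaR={var98:.2%}, 98% CVaR={cvar98:.2%}")
```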

This is my take on CMBS loss/credit risk, having looked at 220,000 CMBS assets (almost all the US commercial property assets since the early 1990s) for a previous employer. I can say this from experience:
1. VaR does both: it underestimates and overestimates losses. It especially underestimates losses in the first 2-3 years of a commercial property (bond) deal.
2. CVaR consistently gives stable estimates.
3. Much of the VaR variation is due to the sampling plan and the amount/quality of data you have.
4. The quality of the forecasting model is ever so critical.
5. As a general rule, if you have to use tools like ARCH & GARCH you have factors missing in your data sample. But then again you may not have a choice if the data is not collected.

I have added Nassim Nicholas Taleb’s Black Swan to my set of metrics (I’ve handled this in a proprietary manner, so I can’t discuss it just yet), and can tell you that there are good CMBS portfolios (Black Swan <= 20%) and some really bad ones. By bad I mean an 80% wipeout.

Some other observations (a quick way to test these distributional shapes yourself is sketched below):
1. Residential mortgage losses are Lognormal, but the catch is that much of this data is incomplete. And this problem is quite severe.
2. Commercial property losses are Gamma distributed, and the incomplete data problem is not as severe.
3. I haven’t found any Normal distributions in real loss data.
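Here is that check: fit both candidate families to your own loss data and compare; the Gamma sample below is synthetic and stands in for real observed severities:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
losses = rng.gamma(shape=1.8, scale=0.12, size=5_000)  # synthetic stand-in

# Fit both candidate families with the location pinned at zero.
gamma_params = stats.gamma.fit(losses, floc=0)
lnorm_params = stats.lognorm.fit(losses, floc=0)

# Compare by log-likelihood (higher is better); a Q-Q plot of the upper
# tail is the more honest check in practice.
ll_gamma = stats.gamma.logpdf(losses, *gamma_params).sum()
ll_lnorm = stats.lognorm.logpdf(losses, *lnorm_params).sum()
print(f"gamma logL={ll_gamma:.1f}  lognormal logL={ll_lnorm:.1f}")
```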

My opinion in another discussion thread, Is VaR a good and reliable method to measure Market Risk?, would be a resounding ‘NO’, and definitely not for Credit Risk, but the industry wants to see it so I produce it.

I figure RMBS is similar, but I stopped working on RMBS when I realized that incomplete data was a severe problem.

I would be interested in other people’s experience in this area. I know that the DSCR based loss model is quite popular. Are there any other types of models in general use for credit risk?

 


Strategy Analyses:
World-class strategy analysis based on proprietary business strategy models that recognize 80 million variations of a company’s business environment. Provides a systematic method of determining one’s competitors’ future courses of action from public data. (Success Story: Westport)

Supply Chain / Just in Time / Kanban / Process Streamlining:
Analyzes factory floor layout and product-machine relationships to determine push/pull processes and optimum scheduling methods. Yes, push and pull are equivalent; the real solution is to determine which best suits your operations and industry. (Success Stories: Texas Instruments, Unilever)

VaR (Value at Risk) & CVaR (Conditional Value at Risk) Analyses:
Finding order in large noisy datasets. Determines VaR, CVaR and the other 1% for time series, credit risk models, economic capital modeling, and commercial property losses. Yes, Solomon has developed commercial property loss models that are far more sophisticated than anything in the industry, including Wachovia’s published models. (Success Story: Capmark)

Valuation Models:
Provides Monte Carlo based equity valuation. Upper and lower limits of company valuations. Advice on what business parameters to look out for.

About the Services Provided:
QuantumRisk specializes in using statistical and numerical modeling to deliver client solutions. It is not uncommon for us to work with 220,000 or 330,000 or even 730,000 data sets/records to solve business/economic model formulation problems.

The extensions of this know-how are the in-house proprietary business process and framework models, derived from the results of these modeling techniques, to provide strategy and business process solutions.

QuantumRisk does not provide financial advisory, tax, audit, financial engineering, legal or HR consulting services. Neither does it provide engineering or materials design consulting.

QuantumRisk does, however, provide IT implementation of the solutions it delivers, per client requirements.

Benjamin T Solomon
QuantumRisk LLC
