
Category Archives: Techniques & Methodologies

Every month QuantumRisk analyzes more than 102,000 commercial properties, with a total original appraised value of $1.5 trillion, backing more than 64,000 loans with an outstanding debt of $680 billion. From this data we report default rates, loss severity before recovery, loan-to-value ratio (LTV), debt service coverage ratio (DSCR), occupancy rates, cap rates, and change in property appraisal value for more than 400 U.S. markets, by property type, by city, by MSA, and by state.

There are 5 types of CMBS Property Risk Analytics* reports:

1. CMBS Deals
2. CMBS Warehouse/Portfolios
3. CMBS Property Risk by City by State
4. CMBS Property Risk for a specific City
5. CMBS Property Risk for a specific MSA

There are 24 sample reports illustrating rigorous evaluations of the downside risk a CMBS deal, warehouse, or portfolio may be subject to under the current (this past month's) economic conditions. These reports may also be used to assess the potential upside. To keep your costs to a minimum, QuantumRisk provides one-off reports for a specific set of deal, warehouse, or portfolio requirements.

The reports are ideal for deal/bond restructuring, associating default risk with the bond stack, negotiating portfolio pricing based on today's loss characteristics, and avoiding sub-optimal investments: a portfolio may look great on paper, but without an evaluation of its loss characteristics one may not be fully informed of the downside risks.

To purchase any of these reports, please contact Ben Solomon.

*Property Risk Analytics is the registered trademark of QuantumRisk LLC.

___________________________________________________________________

Disclosure

PIMCO Chief Executive Officer Mohamed El-Erian (Jonathan Alcorn/Bloomberg)

Many economists (at PIMCO and JPMorgan Chase, for example) are now reporting that the economy will grow by at least 3% in 4Q 2011. In this month's blog we take a look at historical quarterly GDP growth to determine whether this is realistic and what we can learn from it.

We will use politically neutral time series analysis and transition matrices to infer what the economy is capable of achieving all by itself, and that will form the baseline for further inferences.

This analysis would suggest that there is no ‘new normal’.

Reporting GDP Growth

Real Gross Domestic Product, 3 Decimal (GDPC96)

Craig Brown, writing in Seeking Alpha, cautioned about how one calculates and interprets GDP growth. For example, in the US the annualized figure is the quarterly GDP growth multiplied by 4 (4xQ GDP), which makes the annualized GDP much more volatile, while in the UK the figure is based on the last 12 months.

The US Annualized GDP (4xQ) is more volatile than the 12-month Annual GDP. For example, starting with a 2Q 2009 GDP at 12,860.8, and 4 quarters of GDP growth of 1.23%, 0.92%, 0.43% & 0.63%, the Annualized GDP growth is 2.5% (=4×0.63%) while the Annual GDP growth across 4 quarters is 1.98%. Similarly, starting with a 4Q 2007 GDP of 13339.2, and quarterly growths of 0.15%, -1.01%, -1.74% & -1.24%, the Annualized GDP growth is -4.96% while the Annual GDP growth is -3.80%.
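To make the volatility point concrete, here is a minimal Python sketch comparing the two conventions: the US-style figure (4 x the latest quarterly growth) against a trailing four-quarter total over the prior four-quarter total. The GDP levels in the sketch are made up for illustration and are not the GDPC96 series; the point is only that the 4xQ measure swings much more from quarter to quarter.

```python
# Sketch: why 4 x (latest quarterly growth) is noisier than a trailing
# four-quarter measure. The GDP levels below are hypothetical, not GDPC96.
levels = [100.0, 100.8, 101.2, 101.3, 101.9, 102.9, 103.3, 103.7, 104.4, 104.6]

for t in range(8, len(levels)):
    q_growth = levels[t] / levels[t - 1] - 1.0      # latest quarter's growth
    annualized = 4.0 * q_growth                      # US-style "annualized" figure
    # Trailing 12-month view: last four quarters' total vs the prior four quarters'.
    annual = sum(levels[t - 3:t + 1]) / sum(levels[t - 7:t - 3]) - 1.0
    print(f"t={t}: 4xQ = {annualized:+.2%}, trailing-4Q = {annual:+.2%}")
```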

I prefer to use actual measurements, therefore my statistics will be Quarterly real GDP growth as reported by the Fed’s GDPC96, and Annual real GDP based on the last 4 quarters of GDPC96.

Borrowing some ideas from manufacturing process control: it is usually inadvisable to use a volatile metric to manage a process, as it leads to the problem of over-control. That is, more intervention than necessary produces an unstable process that becomes even more difficult to control. The contemporary equivalent in economics is: are we in control (expanding the economy), or are we over-controlling (printing money)? This is not a question we can answer anytime soon. Only time will tell.

Random Walk GDP Growth
Probability Distribution of Quarterly GDP change (QuantumRisk LLC)

Let's first look at GDP as a time series whose quarterly change is a random normal process (see figure).

Analyzing GDPC96 shows that the GDP growth can be modeled by the Normal distribution N(0.88%, 0.80%).

Using this information to generate a Monte Carlo random walk forecasting GDP over the next 6 quarters, starting from 2Q 2010, gives a range of 4Q 2011 Annual GDP growth between -1.24% and 9.41%, with a mean of 3.48% and a standard deviation of 1.62%.
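A minimal sketch of that Monte Carlo random walk, assuming quarterly growth is drawn independently from N(0.88%, 0.80%) and that "annual" growth is approximated by compounding the last four simulated quarters (a simplification of the trailing-four-quarter measure used here); the path count and seed are arbitrary.

```python
# Sketch of the Monte Carlo random-walk idea: quarterly real GDP growth drawn
# from N(0.88%, 0.80%) and projected 6 quarters ahead.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_quarters = 100_000, 6
mu, sigma = 0.0088, 0.0080

q = rng.normal(mu, sigma, size=(n_paths, n_quarters))   # quarterly growth paths
annual = np.prod(1.0 + q[:, -4:], axis=1) - 1.0          # last 4 quarters, compounded

print(f"mean:  {annual.mean():.2%}")
print(f"stdev: {annual.std():.2%}")
print(f"range: {annual.min():.2%} to {annual.max():.2%}")
```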

This is a surprise: a forecast Annual GDP growth of 3.48% suggests that both the PIMCO and JPMorgan Chase model outcomes may be no different from the random walk model.

Further, the range of outcomes, -1.24% to 9.41%, shows that there is no such thing as a 'new normal'. The concept of a 'new normal' is derived from the concept of 'regime change': that the economy has substantially shifted to a new level (i.e., the economic regime has changed) and therefore the econometric models either need to be reworked or replaced. It is less about what the economy is doing and more about whether existing models are able to track the economy.

Transition Matrix GDP Growth

Quarterly GDP Transition Matrix (QuantumRisk LLC)

The GDP Transition Matrix (above) was constructed from the GDPC96 time series. Starting at a 0.45% GDP growth (closest to 2Q 2010 of 0.43%), the transition matrix shows that the Quarterly GDP growth after 6 Quarters is most likely to be between 1.35% to 1.80% (5.4% to 7.20% Annualized). See Table below (not the complete table of results).

GDP Growth:             0.00%   0.45%   0.90%   1.35%   1.80%   2.25%   2.70%   3.15%   3.60%
Probability of Growth:  3.90%   6.07%  12.15%  21.68%  16.76%  12.05%   6.25%   5.51%   3.33%

This is another surprise. The transition matrix shows that the expected Quarterly GDP growth in 4Q 2011 is 1.43%. This is 60% greater than PIMCO's forecast. The probability of a Quarterly GDP growth of at least 0.9% (3.6% annualized) is greater than 78%. So this method of estimating GDP growth is even more optimistic than PIMCO's or JPMorgan Chase's forecasts. Note, however, that there is a 22% probability that this will not be realized.
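For readers who want to reproduce the mechanics, here is a sketch of how a transition-matrix forecast works: bucket quarterly GDP growth, estimate a Markov transition matrix from the historical bucket-to-bucket moves, and propagate the starting bucket six quarters forward. The 3-state matrix below is purely illustrative, not the QuantumRisk matrix estimated from GDPC96.

```python
# Sketch of the transition-matrix idea: propagate a starting growth bucket
# 6 quarters forward through an (illustrative) Markov transition matrix.
import numpy as np

# States: quarterly growth buckets, e.g. [below 0.45%, 0.45%-1.35%, above 1.35%]
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.4, 0.5]])

start = np.array([0.0, 1.0, 0.0])                  # begin in the ~0.45% bucket
dist = start @ np.linalg.matrix_power(P, 6)        # distribution after 6 quarters

print("probability of each growth bucket after 6 quarters:", dist.round(3))
print("row sums (should all be 1):", P.sum(axis=1))
```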

Summary
Hyundai Motor Manufacturing (Mark Elias/Bloomberg)

Recovery Accelerating

What does all this mean? The forecasting techniques used in this analysis are politically neutral, as they are based strictly on historical data. This would imply:

1. Per Bullard's comments, it is now much more likely that QE2 will be substantially reduced in 2011.

2. What is cause for concern is the possibility that modern econometric models are little different from a random walk.

3. Most importantly, if my 4Q 2011 forecast turns out to be correct (we will know in 1Q 2012) it would suggest that the economy recovers at its own rate irrespective of what our elected officials attempt to do or don’t do.

4. Similarly, if the so-called 'massive' intervention had been of a sufficient amount, it should have produced GDP growth significantly greater than that of a random walk. This does not appear to be the case, suggesting that the 'massive' intervention was not massive enough.

5. Arguably, one could ask: should the government have intervened to save GM, AIG, and the banking industry? Should that intervention not have been the domain of the shareholders, letting market forces take over? Yes, one could point to the potentially phenomenal human costs had the government not intervened; but even with intervention we have extended unemployment of about 10%, which weakens the human-cost argument.

Merry Christmas.

___________________________________________________________________

Disclosure

It is critical for investors & real estate professionals to know which cities to invest in and which to stay away from for the time being.

We are very pleased to announce that CoStar’s Watch List featured some of our May 2010 Analytics in their article, “Impact of CRE Distress Varies Widely Market to Market” receiving more than 10,000 reads within 24 hours. A sample report for All Properties is available at http://www.QuantumRisk.com/.  

Our July 2010 CMBS Property Risk Analytics** (CPRA) shows that CMBS defaults and losses vary across the US by city, from 0.0% to 80.0% default rates and 0.0% to 78.0% loss severities. Default rates continue to increase but loss severities continue to decline. How?

July 2010 CMBS Default Rates

The July CMBS Property Risk Analytics shows that CMBS default rates continue to increase and are now at 5.79%. Note the graph is a snapshot of the CMBS pipeline as of the end of July 2010.

July 2010 CMBS Severity of Loss 

The July CMBS Property Risk Analytics shows that the CMBS severity of loss (before recovery) continues to decline and is now at 5.51%. Note, the severity of loss numbers do not include loss due to appraised value reductions. Note the graph is a snapshot of the CMBS pipeline as of the end of July 2010.

FDIC’s Mixed Report on Banks
FDIC's list of "problem banks" reached 829 in 2Q 2010 (NY Times, August 31, 2010). Even so, bank earnings continue to rebound, posting $21.6 billion in industry profits. "Across nearly every category, troubled loans started falling for the first time in more than four years. The sole exception was commercial real estate loans, which continued to show increased weakness. Still, the nation's 7,830 banks remain under pressure."


“Without question, the industry still faces challenges,” Sheila Bair said in a news statement. “But the banking sector is gaining strength. Earnings have grown, and most asset quality indicators are moving in the right direction.” The agency expects a “recovery, sluggish and slow”.

The FDIC is cautioning that even though the outlook is becoming positive, it may not be positive enough for a strong recovery. On the other hand, Russell Abrams of Titan Capital Group LLC is betting the market is underestimating the likelihood of a crash (Bloomberg, August 30, 2010).

So whose outcome is more likely, the FDIC’s small positive or Abrams’ second market crash leading to a double dip recession?

Will This Recession Be A Double Dip?
Our CMBS Property Risk Analytics shows that defaults are increasing but loss severities are declining: apparently contradictory behaviors, given that defaults and loss severities are usually positively correlated.

What is happening in the economy is that, up until about a year ago, CMBS defaults were dominated by newer loans that were backed by overpriced (compared to today's) valuations, hence the large severities of loss late in the pipeline. The more recent defaults are from much older loans, hence the smaller severities of loss early in the pipeline.

This tells us two things. First, the industry losses that were primarily driven by overpriced valuations have been fully absorbed by the industry: good news. Second, industry losses have transitioned to a second stage, insufficient revenue. That is, the more established older loans are defaulting due to insufficient business revenue.

It is this second stage that worries me. Our CMBS Property Risk Analytics shows that at the national level City DSCRs, a proxy for business revenue, were at 1.366 (April), 1.367 (May), 1.376 (June), and 1.397 (July): roughly constant from April through June, with a 2.3% increase in July.

Could the July 2.3% increase be a one off ‘bump’ in the reported data?

Looking at the national-level City Occupancies, our CMBS Property Risk Analytics shows that they were at 88.22%, 88.51%, 90.16%, and 89.33% respectively. That is, over the last 4 months there has been a general upward trend in CMBS City Occupancies of about 0.5% per month (also good news) which, if sustainable, reflects a general economic environment that will avert a second market crash and a double dip.
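A quick sketch of the trend arithmetic, fitting a least-squares line through the four monthly occupancy readings quoted above; the slope comes out at roughly half a percentage point per month.

```python
# Sketch: fit a simple linear trend to the four monthly City Occupancy
# readings quoted above (April-July 2010) to estimate the average monthly change.
import numpy as np

months = np.arange(4)                                  # Apr, May, Jun, Jul
occupancy = np.array([88.22, 88.51, 90.16, 89.33])     # percent, from the post

slope, intercept = np.polyfit(months, occupancy, 1)
print(f"average change: {slope:+.2f} percentage points per month")
```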

Therefore, in my opinion, a double-dip recession is unlikely, and I disagree with Russell Abrams' opinion that a second market crash is likely to occur. I concur with Sheila Bair that even though a recovery is in place, at this point in time it is not likely to be as fast as we would like.

Disclaimer: There is a certain amount of opacity in any business. For example the collapse of Lehman Brothers took us all by surprise. Therefore, if for example a major bank were to collapse that would alter this expected outcome.

CMBS Property Risk Analytics Pricing & Promotion
For Single Users, the CMBS Property Risk Analytics monthly reports are priced as follows:

Item Title                  Monthly Price
QR CPRA Retail              $135.00
QR CPRA Office              $135.00
QR CPRA MultiFamily         $135.00
QR CPRA Hotels/Lodgings     $135.00
QR CPRA All Properties      $370.00

 

The prices shown do not include the discounted annual price, sales tax for Colorado residents/companies, or Multi User pricing. For more information on pricing visit our website http://www.QuantumRisk.com/.

The corresponding April, May, June & July reports will be provided free for all 12-month or annual subscriptions paid by September 10, 2010. For PayPal payment instructions, please contact Ben Solomon. Note, an email address is required for receipt of ftp user id, ftp password and decryption password for each monthly report.

A sample report is available at http://www.QuantumRisk.com/Subscriptions/QRCPRA/SampleReports/(00)CMBSPropertyRiskAnalytics(2010-04)01-AL-SampleReport.zip

How Is the CPRA Report Generated?
Every month we analyze reported data on more than 85,000 properties backing more than 52,000 loans to identify default probability, loss severity before recovery, loan to value ratio (LTV), debt service coverage ratio (DSCR), occupancy rates & change in property appraisal value for more than 400 U.S. markets, by property type, by city, by SMSA/MSA by state across the US. Five property type reports are generated: All Properties, Lodgings/Hotels, MultiFamily, Office & Retail.


** Property Risk Analytics is the registered trademark of QuantumRisk LLC.

___________________________________________________________________ 

Disclosure: I'm a capitalist too, and my musings & opinions on this blog are for informational/educational purposes and part of my efforts to learn from the mistakes of other people. Hope you do, too. These musings are not to be taken as financial advice, and are based on data that is assumed to be correct. Therefore, my opinions are subject to change without notice. This blog is not intended to either negate or advocate any person, entity, product, service or political position. Nor is this blog post to be construed as investment advice.

Contact: Ben Solomon, Managing Principal, QuantumRisk
___________________________________________________________________

Our latest CMBS product, QuantumRisk CMBS Property Risk Analytics (*1) (12 Mb Excel 2007 worksheet) will soon be available as an annual or monthly subscription or one-off purchase on the 15th of each month. 

QuantumRisk LLC invested more than $250,000 in research to develop the algorithms required to produce this product, CMBS Property Risk Analytics, on a monthly basis. Those of you who are familiar with the raw CMBS data know that this is no small feat.

Robust, valid and reliable market data is indispensable for actionable CMBS investment decisions, and we are extremely proud to be one of the very few companies, if not the only one, providing this level of detail on CMBS defaults and losses, thus providing more insightful commercial real estate business intelligence to our clients.

On a monthly basis we analyze more than 85,000 properties backing more than 52,000 loans to report default probability, loss severity before recovery, loan to value ratio (LTV), debt service coverage ratio (DSCR), occupancy rates & change in property appraisal value for more than 400 U.S. markets, by property type, by city, by SMSA/MSA by state. Every month! 

The purpose is for sophisticated investors, investment bankers, underwriters and fund managers to know what is happening, where it is happening, and when it is happening. Even muni bond professionals and local & state governments can use this report to figure out what is happening in their local market, as the commercial real estate market is reflective of the local business environment and therefore of the local economy.

We are also very pleased to announce that CoStar’s Watch List featured some of our May 2010 analytics in their newsletter article, Impact of CRE Distress Varies Widely Market to Market. This article received more than 10,000 reads within the first 24 hours. 

  

How Will Investors Benefit?
Because we analyze more than 85,000 CMBS properties & 52,000 loans on a monthly basis, we have had to develop proprietary data algorithms to process the very large amount of data generated by the manual data entry origination-securitization-servicing business process. These CMBS Property Risk Analytics are organized into more than 420 tables for easy, instant access to the data directly from your own Excel 2007 models.

We report CMBS Property Risk Analytics by property type for cities, SMSA/MSAs & states where there are 5 or more good property records in that property-geographic-statistic bucket, thereby further reducing the noise in the data. We don't guarantee the end result is error free, because we have no control over the origination-securitization-servicing business process, but we do assure you that we have done our very best to give you the very best.

Because the latest data is available on a monthly basis you get the most up to date information about what is happening across the United States in the commercial real estate world. 

Included in each monthly Excel 2007 report are 8 tutorials on how to use these analytics, so that you, the subscriber, can benefit from these reports within minutes of receiving them.

  

What Questions Can Investors Answer?
1. Too much or too little capital?
Are you putting down too much capital with an LTV of 0.6, and want to know what Current (*2) LTVs are in Columbus, OH? 

Answer: You are most likely putting down too much capital as Current LTVs in Columbus OH are averaging 0.71. You could probably reduce your capital requirements by 11% by seeking other lenders. 

2. Realistic income generation?
Will the local or regional economy facilitate an income stream reflective of a DSCR of 1.2 in White Plains NY? 

Answer: Not likely as Current (*2) DSCRs in White Plains NY are averaging 1.03. A DSCR of 1.2 may be acceptable in a few years when the economy improves, but not today. If you were a muni bond professional or in local or state government the DSCRs would provide a quick & dirty indicator of whether you would need to raise taxes or not for general obligation bonds. 

3. Expected loan loss before recovery?
What is my expected loan loss in North Las Vegas, NV?   

Answer: The expected loan loss (before recovery) for North Las Vegas, NV, is 7.45%, with a probability of default of 27.59% and a severity of loss of 27.01%. As of May 2010 North Las Vegas is a high-risk lending environment. Even though Las Vegas is high risk (2.40%, 15.79% & 15.23% respectively), it is less risky than North Las Vegas. (The arithmetic behind these figures is sketched after the notes below.)

4. City not found?
OK there is no data about Lewiston, ME, can I substitute with the SMSA Lewiston-Auburn, ME or the state level data? 

Answer: Yes. If a property count for a statistic is less than 5 (*3) we do not report this city, SMSA or state level statistic. 

5. Realistic occupancy?
In the past, CMBS cash flow models have generally assumed occupancies of about 99%. Is this a valid assumption, especially after this Great Recession? So what would be a reasonable occupancy rate for Fort Worth, TX?

Answer: As of May 2010 the occupancy rate for Fort Worth, TX is 85.97%. This rate will definitely increase as the local Fort Worth / Texas economy improves, but at this time any occupancy rate much greater than 85.97% would be considered optimistic. The occupancy numbers presented in our CMBS Property Risk Analytics do not include completely vacant properties.

6. An estimate of recent appraisal discounts?
What is the average reported property appraisal change (*4) in the state of Texas, over the last 15 months? 

Answer: As of May 2010, in the state of Texas the reported property appraisals are at 61.37% of appraisals done at origination. 

7. Comparative local economics?
Which city poses less commercial property risk? Pasadena CA or Beverly Hills CA? 

Answer:

State:City           # of Reported Properties in City   Probability of Default   Severity of Loss   Expected Loss
CA:Beverly Hills     45                                  2.22%                    2.22%              0.05%
CA:Pasadena          47                                  0.00%                    0.00%              0.00%

With our CMBS Property Risk Analytics we can answer this question conclusively. It is Pasadena CA. 

Notes: 

(*1) “Property Risk Analytics” is the trademark of QuantumRisk LLC. 

(*2)  Current LTV and Current DSCR are calculated using the most recent appraisal values, outstanding balances, NCF DSCRs & NOI DSCRs. 

(*3) The number of properties used to determine a statistic (after processing) varies from 5 to several thousands depending on the size of the city/SMSA/state, property type and the type of statistic being reported.
(*4) Appraisal changes are not as well reported as LTVs or DSCRs. For example, there may be 400 SMSAs reported for defaults but only 35 for appraisal changes.
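As promised in question 3, here is a minimal sketch of the expected-loss arithmetic: expected loss (before recovery) is the probability of default multiplied by the severity of loss. The figures are the May 2010 values quoted above.

```python
# Sketch of the expected-loss arithmetic behind question 3:
# expected loss (before recovery) = probability of default x severity of loss.
markets = {
    "North Las Vegas, NV": (0.2759, 0.2701),   # (PD, severity) from the post
    "Las Vegas, NV":       (0.1579, 0.1523),
}

for city, (pd_, severity) in markets.items():
    expected_loss = pd_ * severity
    print(f"{city}: PD {pd_:.2%} x severity {severity:.2%} = expected loss {expected_loss:.2%}")
```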

  

The Big Surprise: Multi-Property/Cross-Collateralized Loans  

Since we had all this processed data I thought I would check to see if single property loans were at a higher risk than multi-property loans, because from a loss perspective why would we put multiple properties into a single loan or even cross-collateralize a loan?  

I set up 2 pools of loans. The first pool consisted of 42,488 single-property loans of all property types, and the second of 2,177 multi-property & cross-collateralized loans. The surprising results below show that multi-property & cross-collateralized loans are at a higher risk of default than single-property loans. So much for the assumed portfolio diversification effects. With respect to losses, for a better understanding of how portfolio diversification does or does not work, see my blog post Loss Containment: Portfolios.

 

Mortgage Pool      Mortgage Count   Total Mortgage Original Principal Balance ($1E6)   Mortgage Default Rate   Average Mortgage Severity of Loss without Recovery
Multi-Property     2,177            18,356                                              7.49%                   7.19%
Single-Property    42,488           454,809                                             5.94%                   5.65%

 

Why Do We Recommend a Monthly Subscription?  

With a monthly subscription you can look at trends in the data, and your decision-making process is enhanced by knowledge of the local trends. The table below shows the DSCRs of 3 SMSA/MSAs present in the data: Denver-Boulder, CO; Atlanta, GA; and Dallas-Fort Worth, TX.

             Denver-Boulder, CO                     Atlanta, GA                            Dallas-Fort Worth, TX
             Reported Properties   Current DSCRs    Reported Properties   Current DSCRs    Reported Properties   Current DSCRs
2010/05      112                   1.32             237                   1.20             212                   1.28
2010/04      239                   1.36             534                   1.21             520                   1.24
% Change                           -2.64%                                 -0.94%                                 3.53%

  

In this example, DSCR (the ability to generate income to cover debt payments) is used as a proxy for business revenue and therefore for local economic activity. Comparatively speaking, Atlanta, GA has the worst reported DSCRs of the 3 SMSA/MSAs. We can see the different lags in the local economies even though the national economy is experiencing positive GDP growth. The Atlanta, GA local economy is still contracting (-0.94%) but not as severely as the Denver-Boulder, CO local economy (-2.64%), while the Dallas-Fort Worth, TX local economy is expanding at 3.53%.

These contractions and expansions will change from month to month, and a general trend will show where to or not to invest in the near term.

  

Summary 

The questions & answers presented above show that with QuantumRisk CMBS Property Risk Analytics there are many new ways to infer what is happening in the local and state economies, which can mitigate risk and reduce expenses.

Further, we have shown how muni bond professionals, local & state governments can use this data to determine a quick & dirty assessment (and not a substitute for a thorough evaluation) of whether general obligation bonds can be issued without raising taxes and which part of a state needs further attention in terms of recession assistance or business policy matters. 

For a limited time we are making these QuantumRisk CMBS Property Risk Analytics available at a discounted price. Please contact me, Ben Solomon, for further information or to place orders.

___________________________________________________________________ 

Disclosure: I'm a capitalist too, and my musings & opinions on this blog are for informational/educational purposes and part of my efforts to learn from the mistakes of other people. Hope you do, too. These musings are not to be taken as financial advice, and are based on data that is assumed to be correct. Therefore, my opinions are subject to change without notice. This blog is not intended to either negate or advocate any person, entity, product, service or political position. Nor is this blog post to be construed as investment advice.

Contact: Ben Solomon, Managing Principal, QuantumRisk
___________________________________________________________________

Some Thoughts on Default Methods
Summary: Asset defaults (ratio of events) are statistically different from dollar defaults (function of ratio of magnitudes).  

Multiple Distributions: I had originally thought I’d just discuss long tails, but found that some matters needed to be clarified before discussing long tails. Individual asset losses have fat and long tails; the result of default and loss severities that obey binomial, lognormal or gamma distributions.  

Default Methods: There are only 2 broad methods of determining default probabilities in the mortgage industry. The first default method is asset default Pa defined as the number of assets defaulted divided by the total number of assets in the portfolio. This is a statistic of proportion or ratio of events.  

The second is what I term dollar defaults Pd (a.k.a. structural models). A dollar default is said to have occurred when the ratio of the default boundary value to the original value decreases below a specific value. I term them dollar defaults because they are primarily driven by a ratio of magnitudes used to estimate credit risk; severity of loss and 1 - severity of loss are examples. These are statistics of proportion or ratio of magnitude, and we can term these ratios severity-of-loss type statistics.

Industry usage: The 2 ways this is used in CMBS are: 

(1) CDR (Constant Default Rate): CDR is the ratio of outstanding balance at default (default boundary value) divided by original principal balance (original value). In CMBS deal structuring CDRs are presented as a time series of ratios a.k.a. loss vectors; severity of loss in its most basic form. There is no need to model defaults as they are assumed to have occurred (very neat!) and severity of loss is predetermined by the CDR statistic. 

(2) DSCR loss models: A default occurs when the DSCR drops below 1.0. Property cash flows are reduced by 2% (or some suitable value) per annum until this default event occurs. The ratio of magnitude, the ratio of the outstanding principal balance (default boundary value) to the original principal balance (original value), is determined when the DSCR drops below 1.0. To determine severity of loss, the default event is specified by a rate of deterioration of cash flows, which is itself a ratio of magnitudes.
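A minimal sketch of the DSCR loss-model mechanics described in (2): cash flows (and hence DSCR) are haircut at a fixed rate per annum until DSCR drops below 1.0, and the outstanding-to-original balance ratio is read off at that point. The starting DSCR, haircut and straight-line amortization below are illustrative assumptions, not a calibrated model.

```python
# Sketch of a DSCR loss model: haircut cash flows until DSCR < 1.0, then take
# the outstanding/original balance ratio at the default year.
def dscr_default_year(initial_dscr, haircut=0.02, max_years=30):
    """Return the first year in which DSCR falls below 1.0, or None."""
    dscr = initial_dscr
    for year in range(1, max_years + 1):
        dscr *= (1.0 - haircut)          # cash flow (and hence DSCR) shrinks each year
        if dscr < 1.0:
            return year
    return None

original_balance = 10_000_000
annual_amortization = 0.01               # assumed 1% of original balance per year

year = dscr_default_year(initial_dscr=1.20)
if year is not None:
    outstanding = original_balance * (1.0 - annual_amortization * year)
    print(f"default in year {year}, outstanding/original = {outstanding / original_balance:.2%}")
```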

___________________________________________________________________ 

Empirical Data Confirms Biases
Summary: Empirical data confirms dollar default biases 

Empirical Confirmation: Empirical research (by others) confirms that dollar default methods (a.k.a. structural models) underestimate default probabilities. My own research on DSCR loss models concurs with these results: DSCR loss methods underestimate losses early in the life of a loan. My concern is not so much with these models' expected values as with the shape of their tails.

Test 1

Illustration: In a non-rigorous way we can illustrate why. Dollar defaults Pd, as a function of a proportion of magnitude, have a different statistical behavior from asset defaults Pa, a proportion of events. We can see this by writing asset & dollar defaults, respectively, as some function of economic & industry factors f(x):

Asset defaults as a function of economic and industry factors:
Pa = f(x) = number of default events / total number of assets 

Dollar defaults as a function of economic, industry and asset size, s: 
Pd = g( f(x), s) = some function of ($ outstanding balance / $ original balance) 

   

Test 2

Using 2 portfolios to illustrate: Portfolio A consists of 2 assets of $100,000 each, and Portfolio B consists of 3 assets of $100,000 each. Should one asset in each portfolio experience a loss of $70,000 (a single default being a good assumption when default rates are small), Portfolio A's loss is 35% (70,000/200,000) and B's is 23% (70,000/300,000).

 

Different Statistics: However, Portfolio A's asset default rate is 50% (1/2) and Portfolio B's is 33% (1/3), while their respective severities are 35% & 23%; being smaller, the severities could result in an underestimation of asset default probabilities. But wait. Had the loss been $20,000, then Portfolio A's & B's losses would be 10% and 7% respectively, but the asset default rates would still be 50% & 33%. That is, for each asset default there are multiple possible severities of loss, and therefore dollar and asset defaults have different underlying statistical behaviors.
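The same Portfolio A / Portfolio B illustration in code: the asset default rate is a ratio of events and does not move when the loss amount changes, while the dollar loss rate is a ratio of magnitudes and does.

```python
# Sketch: asset default rate (ratio of events) vs dollar loss rate (ratio of
# magnitudes) for one defaulted asset in each of the two portfolios above.
def rates(n_assets, asset_value, loss_amount, n_defaults=1):
    asset_default_rate = n_defaults / n_assets
    dollar_loss_rate = loss_amount / (n_assets * asset_value)
    return asset_default_rate, dollar_loss_rate

for name, n in [("A", 2), ("B", 3)]:
    for loss in (70_000, 20_000):
        a, d = rates(n, 100_000, loss)
        print(f"Portfolio {name}, loss ${loss:,}: asset default {a:.0%}, dollar loss {d:.1%}")
```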

The 2 figures above, Test 1 & Test 2, show very different statistical distributions that depend on the underlying nature of the risk drivers. It is clear from the graphs that the probability distributions of these severity-of-loss type statistics used to generate defaults do not exhibit Binomial behavior; Test 2 is not Lognormal, and Test 2's tail is much fatter and longer than Test 1's.

Alternative Explanation: Researchers currently believe that this consistent underestimation of default probabilities is due to missing factors such as liquidity and recovery. But including recovery will only reduce the severity-of-loss statistic and would further depress the dollar default estimates. My analysis, however, suggests an alternative explanation for the underestimation: different statistical properties.

Undesirable Statistic: The statistical properties of dollar defaults may even be undesirable. Using the form sum of (probability of default x outstanding balance at default) to estimate expected portfolio loss, we see that dollar defaults introduce asset size twice while asset defaults introduce it only once. Therefore, dollar default methodologies may not be desirable for determining default probability.

Beta Distribution: An additional caution for those of you who model default & severity of loss. In my opinion, using the beta distribution is an assurance that your results are incorrect. Why? In my 30+ years working with large data sets, the beta distribution is the single most unstable distribution I have come across. This distribution will change shape when you are not looking! It is so unstable that small changes in its parameters can lead to significant changes in its shape. 

___________________________________________________________________ 

Reducing Impact of Loss Tails
Summary: Portfolios alter the shape of the tail for the better. 

Therefore, we drop the use of dollar default methods. Most of us use portfolio diversification to reduce risk as measured by the standard deviation of returns. But portfolios have little-known properties: they can reduce the effect of long tails and change their shape.

Severity Reduction: A portfolio consists of many assets, and each asset has default probabilities and loss severities associated with it. All other factors being equal, the impact of a portfolio's tail loss can be reduced by increasing the number of assets in the portfolio. Using the 2 portfolios above to illustrate this: the severity of loss of Portfolio A is 35% while that of Portfolio B is 23%. The severity of loss to a portfolio is reduced by the size of the portfolio (all other factors being equal).

Shape Change: Taking this a step further, CMBS loss severities tend to follow Gamma distributions while portfolio losses ought to approach Normal distributions (but not quite). For the same mean & standard deviation, the Gamma's tail can be 25 to 35 times longer than the Normal's tail. Why not quite? In layman's terms, the Central Limit Theorem justifies approximating large-sample statistics with the normal distribution, and therefore large-portfolio statistics should look Normal. However, default probabilities tend to be small, in the 1 to 2% range. Therefore, there aren't enough observations to substantially shrink the loss tail, and the distribution appears lognormal, or at least skewed to the right.
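A sketch of the tail comparison, matching a Gamma and a Normal distribution on mean and standard deviation and comparing their upper quantiles. The mean, standard deviation and quantiles below are illustrative choices; how much longer the Gamma tail is depends on the skew of the severity distribution and on which quantile you compare.

```python
# Sketch: right-tail quantiles of a Gamma vs a Normal with matched mean and
# standard deviation. All parameters are illustrative only.
from scipy import stats

mean, std = 0.05, 0.05                    # e.g. a 5% mean loss severity
shape = (mean / std) ** 2                 # Gamma parameters matching the mean/std
scale = std ** 2 / mean

for q in (0.99, 0.999):
    g = stats.gamma.ppf(q, a=shape, scale=scale)
    n = stats.norm.ppf(q, loc=mean, scale=std)
    print(f"quantile {q}: gamma {g:.3f}, normal {n:.3f}, "
          f"excess-over-mean ratio {(g - mean) / (n - mean):.1f}x")
```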

Multiple Properties: Likewise, having multiple properties (and beware of cross-collateralized loans, they are usually synonymous with multi-property loans) under a single mortgage can lead to catastrophic failure if there are only a few properties. The loan defaults if a single property's loss of income causes the loan's DSCR to drop below 1.0; in this case a multiple-property mortgage magnifies the effect of a single default. To reduce this impact one needs to either reduce the number of these assets (loans) in the deal or increase the number of properties in the mortgage. However, the latter is not a good solution as it defeats the purpose of deal structuring.

Spatial Correlations: Another problem with multiple-property mortgages is that these properties tend to be in the same MSA (Metropolitan Statistical Area), and are therefore exposed to spatial correlations (see for example Prof. Tom Thibodeau, CU Boulder): properties in close proximity tend to rise and fall together.

Multiple Liens: More obviously, the reverse is also true: multiple mortgages on a single property cause all of those loans to be in default should the property's income fall. In this case the mortgages should be assigned to different portfolios, thereby reducing the severity of loss to any specific portfolio.

Wrong Signals: Note that RBS has tried an approach to reduce underwriter’s risk by not closing the loans as they are pooled. Interesting. While it does not reduce investors’ risk, don’t you think this sends the wrong market signals?  

___________________________________________________________________ 

Some Lessons
Summary: Some lessons from a loss perspective. 

1. Avoid single-mortgage-multiple-property (& cross collateralized) assets (loans). 

2. Avoid CMBS deals with multiple cross collateralized assets as portfolio diversification may not be what it appears to be.  

3. Multiple-mortgages-single-property assets reduce risk for the same total principal.  

4. My experience with CMBS data suggests that CMBS deals should be in the 150+ asset range. The RBS $309.7 million, 81 property deal is small, and it should be interesting to see how a small deal at the bottom of the market fares in the future. 

5. Check your methodology. 

___________________________________________________________________ 

Disclosure: I'm a capitalist too, and my musings & opinions on this blog are for informational/educational purposes and part of my efforts to learn from the mistakes of other people. Hope you do, too. These musings are not to be taken as financial advice, and are based on data that is assumed to be correct. Therefore, my opinions are subject to change without notice. This blog is not intended to either negate or advocate any person, entity, product, service or political position. Nor is this blog post to be construed as investment advice.

Contact: Ben Solomon, Managing Principal, QuantumRisk
___________________________________________________________________ 

 

QuantumRisk is starting a new CMBS service – providing econometric loss forecast for CMBS deals.

Why econometric CMBS loss forecasting? Our studies show that

1. The Triangular Matrix Method is incorrect (Excel 2007 example).

2. DSCRs are not good predictors of defaults (see brochure).

3. Stress testing is usually not done correctly (see post).

4. Black Swans can be used to differentiate between two deals with similar cashflows (see brochure).

Contact  Ben Solomon (benjamin.t.solomon@QuantumRisk.com) for more details. Company brochure here.

I recently had a discussion with a respected colleague of mine, which got me thinking about the many differing points of view in finance. My colleague accepts that regime change is a genuine phenomenon in financial time series, and you will find many published articles in respected journals proposing how to deal with regime change. I might also add that many quants accept regime change.

And here we differ. I do not accept regime change. I know from past experience building time series forecasting models that regime change is model misspecification. Let me show you 3 graphs that support my point of view. Clearly Fig 1 is a linear trend.

Figure 1: Time Series from Period 245 to Period 300

Figure 2: Time Series from Period 0 to Period 1000

However, when we expand the range of the time series to between 0 and 1,000 (Fig 2), the linear trend of Fig 1 now becomes a regime change, from a level around 0 prior to period 200 to a new level of about 7 after period 350. This must be proof of regime change, no? But wait.

Figure 3: Another time series.

Figure 3 is another example of regime change. Prior to period 150 the level is around 0, between periods 170 and 270 the level is about 110, and between periods 290 and 400 the level is about 230. This would be considered an example of two regime changes.

Because I generated both data sets I can inform you that Figure 3 was generated by Normally distributed random noise and nothing more. Figures 1 & 2 were also generated by Normally distributed noise and with a trend inserted between periods 250 and 300.
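A sketch of how such a series can be generated, assuming unit-variance Normal noise and a ramp of height 7 inserted between periods 250 and 300 (the post does not give the exact noise scale or trend height; both are assumptions here):

```python
# Sketch reconstructing the kind of series behind Figures 1 & 2: Normally
# distributed noise plus a linear trend inserted between periods 250 and 300.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
y = rng.normal(0.0, 1.0, n)                              # pure noise
ramp = np.clip((np.arange(n) - 250) / 50.0, 0.0, 1.0)    # 0 before 250, 1 after 300
y += 7.0 * ramp                                           # the inserted trend

# Viewed only over periods 245-300 the ramp looks like a clean linear trend;
# viewed over 0-1000 it looks like a "regime change" from level 0 to level 7.
print(y[:245].mean(), y[245:300].mean(), y[350:].mean())
```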

First lesson. A continuous-function time series can present itself as a regime change when it is not, but in real life we cannot 'rerun' the time series to test whether the regime change will recur.

Second lesson. Statistical tests will affirm that regime change did occur when no regime change was present.

Third lesson. Figure 1 is a subset of Figure 2. Therefore, our interpretation of the data depends on our perspective.

Fourth Lesson. Therefore, my experience with time series would suggest that regime change is model misspecification.

Fifth Lesson. Even though economics & finance borrow heavily from the scientific method, they are still an art.

Take care,

Ben

___________________________________________________________________

Disclosure: I'm a capitalist too, and my musings & opinions on this blog are for informational/educational purposes and part of my efforts to learn from the mistakes of other people. Hope you do, too. These musings are not to be taken as financial advice, and are based on data that is assumed to be correct. Therefore, my opinions are subject to change without notice. This blog is not intended to either negate or advocate any person, entity, product, service or political position.

Contact: Ben Solomon, Managing Principal, QuantumRisk

___________________________________________________________________

I read Rishi K Narang's Inside the Black Box: The Simple Truth About Quantitative Trading as a person with many, many years of experience working with gigantic amounts of economic and financial data for the specific purpose of forecasting future outcomes.

This book is a must-read for those who want insight into the world of "high finance". It is well written, simple, and clear, and does not require an understanding of the mathematics of finance. He lays out the how and why of quant trading in non-technical terms, allowing all of us to appreciate that quants are here to stay. Financial services cannot do without quants, just as much as we cannot do without markets.

It is valuable both to newbies and to seasoned statistical modelers such as myself who have not been directly involved in trading. He points to sources of error in financial strategies in both the quant and non-quant world. I particularly liked that he pointed to the problems with correlations.

Narang provides an insight into the world of quants. He explains how and why quants implement strategies and the choices available to them: at a high level there is a limited set of strategies, few enough to count on one hand, but at the detail level the number of choices explodes.

Reading this book told me that Narang is not just another quant. He is a quant who is always asking the question: how can we build a good model with this data? It is one thing to build a model but quite another to dig deep into the data to find out why it ticks.

___________________________________________________________________

Disclosure: I'm a capitalist too, and my musings & opinions on this blog are for informational/educational purposes and part of my efforts to learn from the mistakes of other people. Hope you do, too. These musings are not to be taken as financial advice, and are based on data that is assumed to be correct. Therefore, my opinions are subject to change without notice. This blog is not intended to either negate or advocate any person, entity, product, service or political position.
___________________________________________________________________

My earlier forecast agrees with a National Association for Business Economics survey of top business economists:

1. In my post 14.5 million Jobless & Counting I had shown that if the Government does not do anything radically different from historical responses, our unemployment rate will take until June 2014 to reach 6.1%. 87% of the business economists surveyed expect that unemployment will drop to 4.7% by 2012 or later. That is, the history of President Carter suggests that President Obama is looking to be a one-term president, too. I hope not, but that is how it appears as of today, October 12, 2009.

2. In my post Have We Hit the Housing Bottom? the business economists concur that home prices will experience a gain from 2010.

3. In my post Have We Hit the Housing Bottom? I forecasted that banks will have difficulties until 2012, and will thus be a drag on the economy. The business economists concur that this will be the case, as they expect financial markets to return to normal sometime between 2011 & 2013.

___________________________________________________________________
Disclosure: I'm a capitalist too, and my musings & opinions on this blog are for informational/educational purposes and part of my efforts to learn from the mistakes of other people. Hope you do, too. These musings are not to be taken as financial advice, and are based on data that is assumed to be correct. Therefore, my opinions are subject to change without notice. This blog is not intended to either negate or advocate any person, entity, product, service or political position.
___________________________________________________________________

Goldman’s Profits
Goldman Sachs' 2Q 2009 profits of $3.44 billion made the news. Its trading activities were the primary earnings driver, with wider profit margins on the buying and selling of securities, while everyone else did not make as much. There are news reports in which Goldman Sachs denies substantial profits from trading, but in this post I analyze the available public data and necessarily infer that Goldman Sachs increased profits by changing their algorithms.

First full disclosure. I have no connections with Goldman Sachs (GS), don’t know what specifically they do, how they do it or why they do it, and as of today I don’t know anyone at GS.

Let's lay down some facts.


Experts’ Knowledge
A. Use the Artificial Intelligence definition of experts: an expert is the 5% of the population that has 95% of the knowledge of a field, and the other 95% of the population are non-experts, as they have substantially less than 95% of this knowledge. Given that many people have passed through Goldman Sachs, we can infer some possibilities:

A1. This trading knowledge (TK) has not left Goldman Sachs. The experts are still at Goldman Sachs and the people who left are knowledgeable but not experts. Or,

A2. This knowledge has left GS. That some of these TK experts at Goldman Sachs are now employed with other companies.

Outcome A1 provides us with little room to infer Goldman Sachs’ TK other than they are very good at holding on to their experts. Outcome A2 then raises some interesting possibilities.


Knowledge Dispersion
B. Second fact: we know from recent news reports that only Goldman Sachs made a ton of profits, and nobody else did, or at least not substantially so. Therefore we can infer that the experts who left Goldman Sachs were unable to reproduce Goldman Sachs' successes, because:

B1. They could not reproduce GS’s knowledge capabilities.

B2. They could reproduce GS’s knowledge capabilities but given the recession were constrained from executing these trading strategies.

Outcome B2 is of little value to us. It would also suggest that the other banks having invested so much in technology, people & processes did not have the confidence in their own people. That would not make sense. So we are left with outcome B1. So what does Goldman Sachs have that nobody else seems to have?


Goldman Changed Their Algorithms
C. Third fact, from a statistical perspective there is only one way to make sustained profits, and this can be divided into 3 steps:

C1. To make a profit on a trade, say some distribution P(x,y)

C2. To make a loss on a trade, say some distribution L(a,b)

C3. To make sure you have more profits than losses in a series of trades or E(P) > E(L)

I must admit here that I am assuming that Goldman Sachs’ TK profitability is sustainable, i.e. that Goldman Sachs has achieved E(P) > E(L).

According to the reported data Goldman Sachs increased their VaR (Value at Risk an industry standard for measuring financial risk) from $240 million (1Q 2009) to $245 million (2Q 2009) or 2%. This is a marginal increase in risk recognition compared to the increase in profits from about $1.84 billion (1Q 2009) to $3.44 billion (2Q 2009) or 87%. But they increased their VaR by 33% from a year ago. The general consensus reported in the news is that Goldman Sachs substantially increased their trading risk.

Having worked real numbers with VaR and CVaR over many, many years I would put forward a different opinion. Given the Wall St. crash of 2008, Goldman Sachs substantially changed their VaR methodology to recognize the underestimation of their trading risk in prior years. This can be seen in their historical data. Between 3Q 2002 and 1Q 2008 VaR normalized for asset size ranged between 0.0122% (2Q 2005) and 0.0189% (2Q 2006). Between 2Q 2008 and 4Q 2008 VaR was increased from 0.0206% to 0.0253%. VaR was again increased to 0.0311% (1Q 2009) and 0.0299% (2Q 2009).

Compare the 2009 VaRs to the last time when the Dow was in the 8,000 to 9,500 range or 3Q 2002 to 4Q 2003. In 2002/2003 Goldman Sachs VaR was 0.0132% (4Q 2002) to 0.0175% (3Q 2003). However, in 2009 Goldman Sachs VaR was between 0.0299% and 0.0311%, or double that in 2003. See Figure 1.

Figure 1: Goldman Sachs historical VaR, normalized for asset size.
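For clarity, the normalization used in the figure is simply VaR divided by total assets, so that quarters with very different balance-sheet sizes can be compared. The sketch below shows the calculation with the 2Q 2009 VaR quoted above and a placeholder balance-sheet figure, which is an assumption, not Goldman's reported assets.

```python
# Sketch of the normalization: VaR expressed as a percentage of total assets.
quarterly_var_usd = 245e6          # 2Q 2009 VaR quoted in the post
total_assets_usd = 800e9           # hypothetical balance-sheet size, not actual data

normalized_var = quarterly_var_usd / total_assets_usd
print(f"VaR normalized for asset size: {normalized_var:.4%}")
```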

Therefore my experience working with VaR and CVaR suggest that Goldman Sachs changed their methodology in 3 stages (2Q 2008, 4Q 2008 & 1Q 2009) but did not alter the riskiness of their asset classes.


Industry Misconception: Probability is a Sufficient Criterion
D. So we can assume that Goldman Sachs’ TK has figured out how to ensure that E(P) > E(L). But if you buy into coherent measures of risk, R, and Black Swans, BS, as I do, you would also add some additional constraints to their trading strategies:

D1. First constraint, E(P) > E(L). That is, it is not sufficient that the probability of profit P(P) be greater than the probability of loss P(L); P(P) > P(L) is an insufficient condition because the shape of the return distribution's tail can significantly alter outcomes. We should note here that, on an industry-wide basis, quants use P(P) > P(L) as a sufficient criterion.

D2. Second constraint, the sum of profits S(P) generated from past profitable trades must be significantly greater than the sum of losses S(L) generated from past losing trades or S(P) > S(L). Therefore, E(P) > E(L) versus P(P) > P(L) is a subtle but significant finding.

D3. Third constraint, the 98% loss CVaR, or R(L,98%), must not be substantially large, i.e. R(L,98%) << 100%.

D4. Fourth constraint, Goldman Sachs does not trade when Black Swans are substantially large or BS(L) >> 0.

The reader may ask: what is the significance of D3 or D4? If a large extreme loss is realizable, then it only takes one such trade to eliminate past profits. D1 tells us that for a specific set of trades Goldman Sachs has figured out the statistical long-run outcomes. D2 tells us that Goldman Sachs is keeping track of their trading history within their algorithms. And D3 & D4 tell us that they are selective in what they trade.
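A toy illustration of the point behind D1: a strategy can win far more often than it loses and still have a negative expectation if the loss tail is fat. The payoffs below are made up for illustration.

```python
# Sketch: P(profit) > P(loss) is not sufficient; the tail of the loss matters.
p_profit, profit = 0.90, 1.0       # win $1 on 90% of trades
p_loss, loss = 0.10, 20.0          # lose $20 on 10% of trades

expected_value = p_profit * profit - p_loss * loss
print(f"P(profit) = {p_profit:.0%} > P(loss) = {p_loss:.0%}, "
      f"yet expected value per trade = ${expected_value:+.2f}")
```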

Many people have suggested that Goldman Sachs uses supercomputers to exploit latency differences, but I have heard this claim since the 70s. So, assuming that Goldman Sachs is using supercomputers, I tend to discount latency as the real reason for them. I would think that Goldman Sachs uses supercomputers to evaluate D1, and some form of D3, on the fly in real time.


The Ability to Make Money is not the same as Having Money to Make Money
I received a lot of comments from a lot of people. These comments can be summarized into 4 points, Cheap Funds, Insider Information/Conspiracy Theory, Organization, and Market Efficiency. Goldman Sachs did fail and had to be rescued, but the question remains why did they make a ton of profits that made headlines while others did not? Looking at each of the 4 suggestions here are my opinions:

E1. Organization: Goldman’s need for rescue shows that they weren’t organizationally better than any of the other banks.

E2. Market Efficiency: Under severe stress of 2008/2009 markets would not have been efficient, but that would not exclude Goldman Sachs losing money like the other banks did.

E3. Insider Information/Conspiracy Theory: First, that is a very big risk to take, especially if you get caught. However, wouldn't the other big banks have had the same 'advantage' just by virtue of their size? In my opinion this is foolishness, and I am sure GS employees would agree with me. Second, this is an asymmetric problem: you hear of insiders getting caught because they made a good profit from their inside information, but not when they lost money. In general I believe that inside information is overrated.

E4. Cheap Funds: To use the army term, cheap funds are a force multiplier. You have to have the ability to make profits before you can amplify those gains. That is why VCs are picky and still they don’t always succeed because they too don’t always get the ‘make’ part right.

Conclusion
My inference is that Goldman Sachs had trading knowledge that enabled them to make those trading profits. This must have been fairly recent (2008 & 2009) for that knowledge not to have dispersed into the rest of the industry, and the historical data tends to agree with this timing. This mini case study illustrates two very important points: that it is possible to reduce business risk if you get it right, and that there are still hidden misconceptions that need to be identified and resolved, even in a sophisticated environment like quant-based trading.

___________________________________________________________________
Disclosure: I'm a capitalist too, and my musings & opinions on this blog are for informational/educational purposes and part of my efforts to learn from the mistakes of other people. Hope you do, too. These musings are not to be taken as financial advice, and are based on data that is assumed to be correct. Therefore, my opinions are subject to change without notice. This blog is not intended to either negate or advocate any person, entity, product, service or political position.
___________________________________________________________________