
Goldman’s Profits
Goldman Sachs' 2Q 2009 profit of $3.44 billion made the news. Its trading activities were the primary earnings driver, with wider profit margins on the buying and selling of securities, while everyone else made far less. There are news reports in which Goldman Sachs denies substantial profits from trading, but in this post I analyze the available public data and infer that Goldman Sachs increased profits by changing their algorithms.

First, full disclosure: I have no connection with Goldman Sachs (GS), don't know what specifically they do, how they do it or why they do it, and as of today I don't know anyone at GS.

Let's lay down some facts.

Experts’ Knowledge
A. Use the Artificial Intelligence definition of an expert: experts are the 5% of a population who hold 95% of the knowledge of a field, while the other 95% of the population are non-experts who hold substantially less. Given that many people have passed through Goldman Sachs, we can infer some possibilities:

A1. This trading knowledge (TK) has not left Goldman Sachs. The experts are still at Goldman Sachs and the people who left are knowledgeable but not experts. Or,

A2. This knowledge has left GS. Some of these TK experts are now employed at other companies.

Outcome A1 provides us with little room to infer Goldman Sachs' TK, other than that they are very good at holding on to their experts. Outcome A2, however, raises some interesting possibilities.

Knowledge Dispersion
B. Second fact. We know from recent news reports that only Goldman Sachs made a ton of profits; nobody else did, or at least not to the same degree. Therefore we can infer that the experts who left Goldman Sachs were unable to reproduce its successes, because either:

B1. They could not reproduce GS’s knowledge capabilities.

B2. They could reproduce GS's knowledge capabilities but, given the recession, were constrained from executing these trading strategies.

Outcome B2 is of little value to us. It would also suggest that the other banks, having invested so much in technology, people and processes, did not have confidence in their own people. That would not make sense. So we are left with outcome B1. What, then, does Goldman Sachs have that nobody else seems to have?

Goldman Changed Their Algorithms
C. Third fact: from a statistical perspective there is only one way to make sustained profits, and it can be divided into three steps:

C1. Profits on winning trades follow some distribution, say P(x,y).

C2. Losses on losing trades follow some distribution, say L(a,b).

C3. Make sure you have more profits than losses over a series of trades, or E(P) > E(L).

I must admit here that I am assuming that Goldman Sachs’ TK profitability is sustainable, i.e. that Goldman Sachs has achieved E(P) > E(L).
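To make condition C3 concrete, here is a minimal sketch of estimating E(P) and E(L) from a trade history and checking E(P) > E(L). The trade outcomes are simulated and entirely hypothetical; nothing here reflects Goldman Sachs' actual data.

```python
import random

random.seed(1)

# Hypothetical trade outcomes: positive = profit, negative = loss.
trades = [random.gauss(0.5, 2.0) for _ in range(10_000)]

profits = [t for t in trades if t > 0]
losses = [-t for t in trades if t < 0]  # loss magnitudes as positive numbers

e_p = sum(profits) / len(profits)  # E(P): mean profit per winning trade
e_l = sum(losses) / len(losses)    # E(L): mean loss per losing trade

print(f"E(P) = {e_p:.3f}, E(L) = {e_l:.3f}, sustainable: {e_p > e_l}")
```

With a positively shifted outcome distribution, the estimate lands at E(P) > E(L); a strategy whose history fails this check is losing money in expectation no matter how often it wins.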

According to the reported data, Goldman Sachs increased their VaR (Value at Risk, an industry-standard measure of financial risk) from $240 million (1Q 2009) to $245 million (2Q 2009), or about 2%. This is a marginal increase in risk recognition compared to the increase in profits from about $1.84 billion (1Q 2009) to $3.44 billion (2Q 2009), or 87%. But they did increase their VaR by 33% from a year ago. The general consensus reported in the news is that Goldman Sachs substantially increased their trading risk.
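The percentages above can be checked directly from the reported figures:

```python
def pct_change(old, new):
    """Percentage change from old to new."""
    return (new - old) / old * 100

var_change = pct_change(240, 245)       # VaR, $ millions, 1Q 2009 -> 2Q 2009
profit_change = pct_change(1.84, 3.44)  # profits, $ billions, 1Q 2009 -> 2Q 2009

print(f"VaR change: {var_change:.1f}%")        # ~2%
print(f"Profit change: {profit_change:.1f}%")  # ~87%
```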

Having worked with real VaR and CVaR numbers over many years, I would put forward a different opinion. Given the Wall St. crash of 2008, Goldman Sachs substantially changed their VaR methodology to recognize the underestimation of their trading risk in prior years. This can be seen in their historical data. Between 3Q 2002 and 1Q 2008, VaR normalized for asset size ranged between 0.0122% (2Q 2005) and 0.0189% (2Q 2006). Between 2Q 2008 and 4Q 2008, VaR was increased from 0.0206% to 0.0253%. VaR was increased again to 0.0311% (1Q 2009) and 0.0299% (2Q 2009).

Compare the 2009 VaRs to the last time the Dow was in the 8,000 to 9,500 range, 3Q 2002 to 4Q 2003. In 2002/2003, Goldman Sachs' VaR ranged from 0.0132% (4Q 2002) to 0.0175% (3Q 2003). In 2009, however, Goldman Sachs' VaR was between 0.0299% and 0.0311%, roughly double the 2002/2003 level. See Figure 1.
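Using the normalized VaR figures quoted above (VaR as a percentage of assets), the doubling is straightforward to verify:

```python
# Normalized VaR figures quoted in the text (VaR as % of assets).
var_2002_2003 = {"4Q 2002": 0.0132, "3Q 2003": 0.0175}  # Dow in 8,000-9,500 range
var_2009 = {"1Q 2009": 0.0311, "2Q 2009": 0.0299}

avg_old = sum(var_2002_2003.values()) / len(var_2002_2003)
avg_new = sum(var_2009.values()) / len(var_2009)

print(f"2009 normalized VaR is {avg_new / avg_old:.1f}x the 2002/2003 level")
```

At comparable Dow levels, the normalized VaR comes out at roughly twice its earlier value, which is consistent with a methodology change rather than a doubling of actual portfolio risk.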


Therefore my experience working with VaR and CVaR suggests that Goldman Sachs changed their methodology in three stages (2Q 2008, 4Q 2008 and 1Q 2009) but did not alter the riskiness of their asset classes.

Industry Misconception: Probability is a Sufficient Criterion
D. So we can assume that Goldman Sachs’ TK has figured out how to ensure that E(P) > E(L). But if you buy into coherent measures of risk, R, and Black Swans, BS, as I do, you would also add some additional constraints to their trading strategies:

D1. First constraint, E(P) > E(L). It is not sufficient for the probability of profit P(P) to be greater than the probability of loss P(L); that is, P(P) > P(L) is an insufficient condition, because the shape of the return distribution's tail can significantly alter outcomes. We should note here that, industry-wide, quants use P(P) > P(L) as a sufficient criterion.

D2. Second constraint, the sum of profits S(P) generated from past profitable trades must be significantly greater than the sum of losses S(L) generated from past losing trades, or S(P) > S(L). Therefore, E(P) > E(L) versus P(P) > P(L) is a subtle but significant distinction.

D3. Third constraint, the 98% loss CVaR, R(L,98%), must not be substantially large, or R(L,98%) << 100%.

D4. Fourth constraint, Goldman Sachs does not trade when Black Swans are substantially large or BS(L) >> 0.

The reader may ask: what is the significance of D3 and D4? If a large extreme loss is realizable, it only takes one such trade to eliminate past profits. D1 tells us that for a specific set of trades Goldman Sachs has figured out the statistical long-run outcomes. D2 tells us that Goldman Sachs is keeping track of their trading history within their algorithms. And D3 and D4 tell us that they are selective in what they trade.
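A toy example with made-up numbers shows why P(P) > P(L) alone is insufficient, and how a 98% CVaR check in the spirit of D3 flags the danger: a strategy can win 95% of the time and still lose money overall.

```python
# Hypothetical strategy: 95 trades win $1 each, 5 trades lose $25 each.
trades = [1.0] * 95 + [-25.0] * 5

p_win = sum(t > 0 for t in trades) / len(trades)  # P(P) = 0.95 > P(L) = 0.05
total = sum(trades)                               # S(P) - S(L) = 95 - 125 = -30

# 98% loss CVaR (expected shortfall): mean loss over the worst 2% of trades.
worst = sorted(trades)[: max(1, int(len(trades) * 0.02))]
cvar_98 = -sum(worst) / len(worst)

print(f"P(P) = {p_win:.2f}, net P&L = {total:.0f}, CVaR(98%) = {cvar_98:.0f}")
```

The win probability looks excellent, yet the net P&L is negative and the 98% CVaR reveals that the tail trades are large enough to wipe out the accumulated gains, which is exactly the misconception D1 and D3 guard against.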

Many people have suggested that Goldman Sachs uses supercomputers to exploit latency differences, but I have heard this claim since the 70s, so I tend to discount latency as the real reason. I would think that Goldman Sachs uses supercomputers to evaluate D1, and some form of D3, on the fly in real time.

The Ability to Make Money is not the same as Having Money to Make Money
I received a lot of comments from a lot of people. These comments can be summarized into four points: Cheap Funds, Insider Information/Conspiracy Theory, Organization, and Market Efficiency. Goldman Sachs did fail and had to be rescued, but the question remains: why did they make headline-grabbing profits while others did not? Looking at each of the four suggestions, here are my opinions:

E1. Organization: Goldman's need for a rescue shows that they were not organizationally better than any of the other banks.

E2. Market Efficiency: Under the severe stress of 2008/2009, markets would not have been efficient, but that alone would not have excluded Goldman Sachs from losing money like the other banks did.

E3. Insider Information/Conspiracy Theory: First, that is a very big risk to take, especially if you get caught. Moreover, would not the other big banks have had the same 'advantage' just by virtue of their size? In my opinion this is foolishness, and I am sure GS employees would agree with me. Second, this is an asymmetric problem: you hear of insiders getting caught because they made a good profit from their inside information, but not when they lost money. In general I believe that inside information is overrated.

E4. Cheap Funds: To use the army term, cheap funds are a force multiplier. You have to have the ability to make profits before you can amplify those gains. That is why VCs are picky, and even then they don't always succeed, because they too don't always get the 'make' part right.

My inference is that Goldman Sachs had trading knowledge that enabled them to make those trading profits. This must have been fairly recent (2008 and 2009) for that knowledge not to have dispersed into the rest of the industry, and the historical data tends to agree with this timing. This mini case study illustrates two very important points: that it is possible to reduce business risk if you get it right, and that there are still hidden misconceptions to be identified and resolved even in an environment as sophisticated as quant-based trading.

Disclosure: I'm a capitalist too, and my musings and opinions on this blog are for informational/educational purposes and part of my efforts to learn from the mistakes of other people. I hope you do, too. These musings are not to be taken as financial advice, and are based on data that is assumed to be correct. Therefore, my opinions are subject to change without notice. This blog is not intended to either negate or advocate any person, entity, product, service or political position.