Sunday, September 20, 2009

Risk models - some math

In the previous post I wrote about how the failure of mathematical risk models contributed to the financial crisis of 2008. That post did not include any math. This post is about some of the math.

Modern financial theory purports to calculate the probabilities associated with risk to a high degree of precision. There is much more to modern financial theory than I understand, but it is primarily built on the foundation of the normal distribution, also called the bell curve:

The normal distribution is a wonderful piece of mathematics. If you know only two numbers — the mean and the standard deviation — you can calculate all kinds of probabilities with precision.

You may have encountered the normal distribution in high school or college. Even if you didn't study it formally, you may have heard about grading tests "on a curve." This is the curve! The normal distribution describes many phenomena in our world, including the distribution of test scores. For example, the Scholastic Aptitude Test (SAT) is graded according to the normal distribution. Scores on the SAT are calibrated to have a mean of 500 and a standard deviation of 100. There is only a 2.3% probability of scoring 700 or higher (two standard deviations above the mean). That is the kind of calculation that is possible with the normal distribution.
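You can check that 2.3% figure yourself with a few lines of code. The sketch below uses only Python's standard library; the `normal_tail` helper is just a name I made up for this example:

```python
import math

def normal_tail(x, mean, sd):
    """P(X >= x) for a normal distribution, via the complementary error function."""
    z = (x - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))

# SAT scores: mean 500, standard deviation 100
p = normal_tail(700, 500, 100)
print(f"P(score >= 700) = {p:.3f}")  # prints 0.023, i.e. about 2.3%
```

That is the whole trick: once you know the mean and standard deviation, every such probability falls out of the same formula.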

The basic insight of modern financial theory is that changes in the price of an asset (i.e., the volatility of that price) can be modeled using the normal distribution. There will be many small changes clustered around the mean. There will be fewer large changes, far from the mean.
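As a sketch of that idea, the snippet below draws hypothetical monthly price changes from a normal distribution and tallies how they cluster around the mean. The mean and standard deviation here are made-up illustrative numbers, not actual S&P 500 statistics:

```python
import random

random.seed(42)  # fixed seed so the tallies are reproducible

# Hypothetical monthly returns: mean 0.8%, standard deviation 4.5% (illustrative only)
mean, sd = 0.008, 0.045
changes = [random.gauss(mean, sd) for _ in range(10_000)]

within_1sd = sum(abs(c - mean) <= sd for c in changes) / len(changes)
beyond_2sd = sum(abs(c - mean) > 2 * sd for c in changes) / len(changes)

print(f"within 1 SD:  {within_1sd:.1%}")  # roughly 68% -- the cluster near the mean
print(f"beyond 2 SD:  {beyond_2sd:.1%}")  # roughly 5% -- the rarer large changes
```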

This works a lot of the time. The problem is that it doesn't work all the time. Below is an example of when it doesn't work. This graph shows monthly changes in the S&P 500 stock index (source p. 147):

The changes far from the mean (what are called the "tails" of the distribution) do NOT decrease to virtually nothing as in the normal distribution; they actually increase! A mathematical risk model based on the normal distribution will significantly underestimate the amount of risk in the world, especially in times of high volatility—i.e., in the tails.
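To see how severe the underestimate gets, here is what a purely normal model predicts for moves of increasing size (standard-library Python again). The point is how quickly the model's tail probabilities vanish; the chart above shows that real market data do not:

```python
import math

def two_sided_tail(k):
    """P(|Z| > k) under the standard normal distribution."""
    return math.erfc(k / math.sqrt(2))

for k in range(2, 6):
    print(f"P(move beyond {k} SD) = {two_sided_tail(k):.2e}")
```

Under the model, a 5-standard-deviation month should happen less than once in a million months. A risk manager who trusts that number is in for a surprise.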

So why do financial institutions use mathematical risk models that are ultimately based on the normal distribution? Two reasons. First, it does work well a lot of the time. And second, we don't have any better tools in our mathematical toolbox. There simply isn't any mathematics available that adequately describes risk in times of high volatility, like last fall.

For a list of references with more information, see this post.