Wilmott books

So I am not doing this to be obtuse. I am also ignoring your height example, because somebody getting taller is not really an independent event from one height to the next, is it? I know you said it was a bad example. :)

If you are at +10, the probability of going up is the same as going down. To use your quote: "the statistical phenomenon stating that the greater the deviation of a random variate from its mean, the greater the probability that the next measured variate will deviate less far." This quote is saying that if you are far above 0, then there should be a higher probability of it moving down than up. But you know it's 50-50 here, so it's not mean-reverting according to your quote.

When you quoted that passage, you missed this bit: "Although this phenomenon appears to violate the definition of independent events, it simply reflects the fact that the probability function P(x) of any random variable x, by definition, is nonnegative over every interval and integrates to one over the interval. Thus, as you move away from the mean, the proportion of the distribution that lies closer to the mean than you do increases continuously."
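That quoted passage can be checked numerically for a standard normal distribution: the fraction of the distribution lying closer to the mean than a point x grows as x moves further out. A minimal sketch (the function name is my own, not from the quote):

```python
from math import erf, sqrt

def closer_to_mean(x: float) -> float:
    """P(|X| < |x|) for X ~ N(0, 1): the share of the
    distribution lying closer to the mean than x does."""
    return erf(abs(x) / sqrt(2))

# The proportion closer to the mean rises steadily as x moves away from 0
for x in (0.5, 1.0, 2.0, 3.0):
    print(x, round(closer_to_mean(x), 4))
```

Note this says nothing about the next draw being pulled back towards the mean; it is a statement about proportions of the distribution, which is exactly the point of the quote.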

The thing that I am trying (badly) to get across is the tenet of the central limit theorem that holds this together.

Central limit theorem in Statistics

This is not about the summation of many possible paths and their mean. It is about how a particular path exhibits reversion-to-the-mean behaviour because that path has a probability distribution function which favours pushing extreme prices back towards the mean.

Say there is a starting price P0 at time T

Then at time T+1 price moves to P1.

The only thing we know is that P1 was drawn from a probability distribution centred on P0: the highest probability is that P1 equals P0, with the probability decreasing the further P1 is from P0, according to a normal distribution.

Then at time T+2 price moves to P2.


The only thing we know is that P2 was drawn from a probability distribution centred on P1: the highest probability is that P2 equals P1, with the probability decreasing the further P2 is from P1, according to a normal distribution.

This process continues ad infinitum.

The central limit theorem states that the sum of a large number of independent observations will be approximately normally distributed. It is NOT about the probability distribution of the individual event. The implication of this for a single path is that there is a low probability of the observations significantly deviating from the mean and an increasing, normally distributed probability of the observations reverting to the mean.

So for coin tossing, over an increasing sample size, that path will tend to revert to the mean of 0. This is also the model used for the random walk of an individual security's price. If the price started at 10 and there is no trend, it will mean-revert around 10, naturally, on its single path, due to the central limit theorem.

Anyway, don't take it from me. The Wilmott book covers the way the random walk is modelled on pages 105 & 106, and pages 115 & 116 describe the nature of the central limit theorem.
 
My understanding of mean reversion in the market is that wherever price goes, it will revert back to the price participants agree to transact around.

So let's say players like the x.xx50 level. Price goes up to x.xx60, then gradually goes back down to x.xx50 because that's the price around which they are in agreement to transact. Price might then go down to x.xx40. You now have a trading range between the 60 and 40, with mean reversion in between around the 50.

In currencies, I've generally noticed this event around lunch hours where price just weaves around a central number a couple times. Then suddenly it hits the 50 again and continues the previous trend's direction.

I haven't paid any attention to mean reversion in higher time frames than the 5m, so I've no idea how that works. That's been my limited experience, so take with a grain of salt.

-----

Anyway, that said I think I agree with Shakone in that there needs a catalyst for mean reversion to take place. Players agree on 50, so price reverts to 50. Coins only agree that they should land either heads or tails. So it seems to me that a coin will have mean reversion in the short term, but should a large deviation occur of 100+ heads or tails in a row, then there's nothing to say that coin "price" will head back to 0. It will, however, mean revert around the new "price" until another large deviation occurs.

... Which actually sounds very similar to how price moves IRL. Hmm... :innocent:
 
A random walk is an I(1) process, which means that as the sample size tends to infinity, so does the variance. In other words, a random walk can end up anywhere, and certainly is NOT mean reverting. Nearly all financial prices are I(1).

To obtain a stationary process, which means that the mean and the covariance are constant at all times, it is necessary to take first differences. That is what the '1' refers to: the number of times a process has to be differenced before the resulting observations are independent and identically distributed.

The first difference is (today's price - yesterday's price), which is the price change, a simple form of return. Most return series tend to be stationary, but sometimes it's necessary to difference the returns as well, in which case we have an I(2) process. I(2) processes exist for growth variables in macroeconomics.
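A minimal simulation of the I(1) idea (names and sizes are my own choices): a random walk's variance grows with the sample, while differencing it once recovers the i.i.d. steps, which are stationary.

```python
import numpy as np

rng = np.random.default_rng(42)

# Random walk: cumulative sum of i.i.d. +/-1 steps -> an I(1) process,
# whose variance grows with the length of the sample
steps = rng.choice([-1, 1], size=10_000)
walk = np.cumsum(steps)

# First-differencing the walk recovers the i.i.d. steps -> stationary, I(0)
diffed = np.diff(walk)

print(np.var(walk[:1_000]), np.var(walk[:10_000]))  # variance grows with sample
print(np.var(diffed))                               # stays close to 1
```

The same one-line differencing is what turns a price series into a return series in the post above.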
 
Mean reversion, meanwhile, is best thought of in terms of cointegrated variables, a type of dependence like correlation. You might expect two airline stocks to be cointegrated, which means that there is a natural tendency for the spread between their prices to "error-correct" to zero.
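A toy illustration of that cointegration point, with made-up numbers: two simulated "stocks" share a common random-walk component, so each price wanders on its own, but the spread between them stays anchored near zero.

```python
import numpy as np

rng = np.random.default_rng(0)

# A shared 'sector' random walk that both stocks load on
common = np.cumsum(rng.normal(size=5_000))

# Each stock = common trend + its own stationary noise
stock_a = common + rng.normal(scale=0.5, size=5_000)
stock_b = common + rng.normal(scale=0.5, size=5_000)

# Each price series is I(1) and wanders, but the spread is stationary:
# it error-corrects around zero rather than drifting away
spread = stock_a - stock_b
print(round(spread.mean(), 3), round(spread.std(), 3))
```

This is why mean-reversion trades are usually framed on a spread rather than on a single price.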
 
How does the central limit theorem apply in this, Joey? This is what's confusing me.

e2a - amongst many things, obviously.
 
How does the central limit theorem apply in this, Joey? This is what's confusing me.

I think it's probably referring to the error term in the following:

ln P(t) - ln P(t-1) = e(t)

The daily difference in natural logs is the shock e(t). Summed over many days, ln P(t) - ln P(0) is a sum of these shocks, and by the central limit theorem it gets closer and closer to the bell-shaped normal distribution as the number of days grows. As long as the individual e(t) are i.i.d. (not necessarily themselves normal), the multi-period log return becomes normally distributed.

The difference in logs is a smoother version of percentage return, so it's similar to the first-differenced price series I mentioned earlier.
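A quick simulation of that claim (all parameter values are my own): the daily shocks e(t) here are uniform, not normal, yet the multi-period log return, being a sum of i.i.d. shocks, comes out close to normal, with skew near 0 and kurtosis near 3.

```python
import numpy as np

rng = np.random.default_rng(1)

# Daily log-return shocks that are i.i.d. but NOT normal (uniform here)
n_days, n_paths = 250, 10_000
e = rng.uniform(-0.01, 0.01, size=(n_days, n_paths))

# Multi-period log return: ln P(T) - ln P(0) = sum of the daily e(t)
total_log_return = e.sum(axis=0)

# By the CLT the sum is approximately normal even though each e(t) is uniform;
# check the standardised moments: skew ~ 0, kurtosis ~ 3
z = (total_log_return - total_log_return.mean()) / total_log_return.std()
print(round(np.mean(z**3), 3), round(np.mean(z**4), 3))
```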
 
Jesus. The penny has now dropped. Thanks Shake and Joey.

OBVIOUSLY random walk process isn't mean reverting!!! You'd have to be a right tool to think so.
 
Awww, I thought I'd really got this nailed in layman's terms and it's already been explained! Screw it, I'm putting it anyway for anyone else.

Back to the coin example: if you start flipping a coin and you've seen 10 more heads than tails, you're expecting that, over an infinite amount of time, there will be 10 more tails than heads. Now I come along and start observing. I'm starting my count now, so I expect there to be the same number of heads and tails. We can't both be right.
 
So 10 consecutive tosses would be 0.5^10 ≈ 0.00098 (a little less than a 0.1% probability of occurrence).

the decimal probability of 10 consecutive tosses is actually 0.5^9 :LOL:

but it would be really annoying and pedantic to point that out. Luckily I am sharper than a Gillette Mach3. Sometimes my genius surprises me. lol.
 
I think it's probably referring to the error term in the following:

ln P(t) - ln P(t-1) = e(t)

The daily difference in natural logs is the shock e(t). Summed over many days, ln P(t) - ln P(0) is a sum of these shocks, and by the central limit theorem it gets closer and closer to the bell-shaped normal distribution as the number of days grows. As long as the individual e(t) are i.i.d. (not necessarily themselves normal), the multi-period log return becomes normally distributed.

The difference in logs is a smoother version of percentage return, so it's similar to the first-differenced price series I mentioned earlier.

Okay, this is where I start getting confused, because this was my initial thought, though with regard to % return rather than the ln. I've never actually done any rigorous analysis to check whether fat tails render it completely redundant, but if I've understood this correctly, doesn't any skewness in the distribution of your left side (which I'm assuming occurs quite often, given the few analyses I've done myself) indicate a bias? And if so, how can any sample with a bias be considered the outcome of a random process?
 
Jesus. The penny has now dropped. Thanks Shake and Joey.

OBVIOUSLY random walk process isn't mean reverting!!! You'd have to be a right tool to think so.

lol! As far as the central limit theorem goes in relation to the example given: it says that if we take a sequence of coin tosses and sum them up (this gives us the path mentioned), and then divide by the number of coin tosses (this is now no longer the random walk path), then we converge to 0, with the distribution around 0 being approximately normal at every stage.
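That distinction can be seen in a few lines of simulation (my own sketch): the cumulative path itself wanders freely, but the path divided by the number of tosses settles towards 0.

```python
import numpy as np

rng = np.random.default_rng(7)

tosses = rng.choice([-1, 1], size=100_000)  # +1 for heads, -1 for tails
path = np.cumsum(tosses)                    # the random walk itself

counts = np.arange(1, len(tosses) + 1)
scaled = path / counts                      # path divided by number of tosses

# The walk can be far from 0 (typical size grows like sqrt(n)),
# but heads-minus-tails per toss shrinks towards 0
print(path[-1], round(scaled[-1], 5))
```

So it is the scaled average, not the walk, that converges, which is exactly why the walk itself is not mean reverting.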
 
Because it's an auction and perfect auctions demonstrate normally distributed price. After a low, there are no more sellers interested in selling at that price and buyers force price back up through laws of supply/demand.

Why would a perfect auction demonstrate normally distributed price, exactly?

Does the desire for the item on auction have no bearing on things?
 
Why would a perfect auction demonstrate normally distributed price, exactly?

Does the desire for the item on auction have no bearing on things?

You really think I could give you a sensible answer given the fact it took 4 pages for me to realise that Random Walk cannot be a Mean Reverting process?

Seriously though, the desire for the item at different prices gives it the normally distributed shape. At the extreme high, some people will have been prepared to buy at that price but not many and sellers would have come in and driven price down. At the extreme low, other people would have been prepared to sell at that price but not many and buyers would have come in and driven the price up. In between the distribution shapes out to be normal. That's as much as I can tell you.
 
the decimal probability of 10 consecutive tosses is actually 0.5^9 :LOL:

but it would be really annoying and pedantic to point that out. Luckily I am sharper than a Gillette Mach3. Sometimes my genius surprises me. lol.

Why 9?

Doesn't the series start at 1 and finish at n, with n being 10? i.e. 0.5^n

Toss 1 = 0.5^1 = 0.5
Toss 2 = 0.5^2 = 0.25
Toss 3 = 0.5^3 = 0.125

Toss n = 0.5^n
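The two answers being debated correspond to two different events, and a brute-force enumeration separates them (this check is mine, not from the thread): one specific run of 10 outcomes, e.g. ten heads, has probability 0.5^10, while "all 10 the same, heads or tails" has probability 0.5^9.

```python
from fractions import Fraction
from itertools import product

# Enumerate all 2^10 equally likely sequences of 10 tosses
seqs = list(product("HT", repeat=10))

# Event 1: one specific sequence (ten heads) -> 0.5**10
p_all_heads = Fraction(sum(s == tuple("H" * 10) for s in seqs), len(seqs))

# Event 2: all ten tosses the same, heads OR tails -> 0.5**9
p_all_same = Fraction(sum(len(set(s)) == 1 for s in seqs), len(seqs))

print(p_all_heads)  # 1/1024, i.e. 0.5**10
print(p_all_same)   # 1/512, i.e. 0.5**9
```

So both posters can be right, depending on which event "10 consecutive tosses" is taken to mean.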
 
Okay, this is where I start getting confused, because this was my initial thought, though with regard to % return rather than the ln. I've never actually done any rigorous analysis to check whether fat tails render it completely redundant, but if I've understood this correctly, doesn't any skewness in the distribution of your left side (which I'm assuming occurs quite often, given the few analyses I've done myself) indicate a bias? And if so, how can any sample with a bias be considered the outcome of a random process?

The natural log helps to deal with the fat tails (kurtosis). With regard to a random process, the shape of the distribution, including skew (as in incomes), does not determine whether the 'random' assumption is valid. All you need is that the sum of the probabilities adds to 1 (or integrates to 1 for continuous distributions). A biased coin is still a random process.


Also, I think a good example of the central limit theorem is as follows:

1) Take a draw from a uniform distribution on [-1,1]. Record the result. (A uniform distribution just assigns an equal probability to any value in the range - e.g. RAND() in Excel, which does this over [0,1).)

2) The average of all the results from 1) becomes closer and closer to 0 with the number of draws, with the shape of the distribution resembling a bell curve. The dispersion around 0 decreases with the sample size (the curve gets tighter) - take the standard deviation of all the results and divide by the square-root of the sample size to get the standard deviation of the mean.

3) Any independent and identically distributed process, such as coin tossing and dice throwing will have this property in the mean.
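Steps 1) to 3) above can be sketched directly (sample sizes are my own choice): repeat the draw-and-average experiment at two sample sizes and compare the spread of the means against the predicted sd(draws)/sqrt(n).

```python
import numpy as np

rng = np.random.default_rng(3)

# Repeat the draw-and-average experiment at two sample sizes
for n in (100, 10_000):
    draws = rng.uniform(-1, 1, size=(1_000, n))  # 1,000 experiments of n draws
    means = draws.mean(axis=1)                   # step 2: average each experiment

    # The means cluster around 0 and the bell curve tightens as n grows;
    # the predicted standard deviation of the mean is sd(draws) / sqrt(n)
    predicted = draws.std() / np.sqrt(n)
    print(n, round(means.std(), 5), round(predicted, 5))
```

Plotting a histogram of `means` at each n shows the bell shape sharpening around 0, which is the point of step 2).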

The key area where all this fails in equities, for example, is that volatility is not constant as assumed in the Black-Scholes model. High volatility follows high volatility and vice versa, with an asymmetry with regard to crashes and booms. The i.i.d assumption is invalidated.
 
Sorry, I didn't mean to say random process; I meant random walk. I struggle with words sometimes. Something to do with the drift nonsense, I suppose, but the scope of my knowledge falls short of a reason why the direction of drift would change.


High volatility follows high volatility and vice versa, with an asymmetry with regard to crashes and booms. The i.i.d assumption is invalidated.

This articulates my thoughts on the matter I think.
 
I would add that systems don't have to be mean reverting or trend following, the simplest example being pairs trading. Infamously, the claim is that RenTech's systems aren't either mean reverting or trend following (for example, they released one trade they found but don't do, as it doesn't meet costs, where you buy the market on a clear day). Either way, none of this stuff is relevant to 'discretionary' trading, and you can't do this stuff without TBs of data. A lot of it is basically about pricing very illiquid options/structured products or calculating risk; it isn't about which direction to bet on cable.
 
Well yes, but Wilmott is aboots the risk, innit. He doesn't generate trades, does he? I thought his funds all blew up...
 
This is probably a gross oversimplification but the exercise seems to distill down to:

a) Looking for risk free money opportunities

b) An arms race of out-modelling one another which I think sums back to (a) anyway.

Directional doesn't come into it at all.
 