Deadline June

Re: Saviour optimisation robustness surfaces for stop & target

To put this the other way around, if the market produces a good profit for one parameter value but produces significantly less for a nearby parameter value, then that's not a stable parameter.
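A quick sketch of that stability idea (my own illustration, not anything from the thread): treat the optimiser's profit-per-parameter-value output as a curve, and only accept a value if its immediate neighbours earn nearly as much.

```python
# Sketch (illustrative): a stability check on a 1-D optimisation result.
# A parameter value only counts as "stable" if the profits at its immediate
# neighbours are within some tolerance of its own profit.
def stable_params(profits, tolerance=0.30):
    """profits: list of profit figures indexed by parameter step.
    Returns the indices of profitable values whose neighbours earn
    at least (1 - tolerance) of the local profit."""
    stable = []
    for i in range(1, len(profits) - 1):
        p = profits[i]
        if p <= 0:
            continue
        neighbours = (profits[i - 1], profits[i + 1])
        if all(n >= p * (1 - tolerance) for n in neighbours):
            stable.append(i)
    return stable

profits = [100, 900, 150, 400, 420, 410, 50]
print(stable_params(profits))  # [2, 4] - the lone spike at index 1 is rejected
```

The spike at index 1 is exactly the kind of "best" value a naive optimiser would pick; the plateau around index 4 is what you actually want to trade.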

Good stuff (y)

You have come a long way...
 
Optimising Saviour001 - known unknowns

The usual questions about the rationale behind optimisation are raising their ugly heads again.

I never wanted to optimise and tried to stay with a system that didn't need it, but I had to abandon that hope and do a full optimisation on the exit method.

Ideally I want a set-and-forget set of optimised parameters but I need to look at the potential pitfalls with that and the pros and cons of any alternative before I go further now.

From the equity curves of many systems that I built or played with, I saw that the credit crunch and the associated volatility often had a huge impact on the systems. For that reason I decided to include some of the credit crunch period - about half - in my optimisation window.

I plan to forward test the systems blind on the 2.5 years from 2008-07-01 to the present.

What I didn't foresee is that I would do a big double optimisation, running an optimisation on 20 values for 2 parameters - the stop and the target.

Why does that unsettle me? Because I worry that the optimal parameters won't hold up in the walk-forward test or in live trading. Looking at the profit vs parameter value graphs from the optimisation above, they don't look particularly robust. They look worth the risk, but they don't fill me with confidence.

My prediction of results from the walk-forward is that the system will fail. But I have to do the process and get there first.

What I need is an equity curve over the optimisation period for all instruments combined. However, that is a fair amount of work to put together, so rather than possibly waste time on it, I'll plug the optimal values into the system and run it on the walk-forward period. If it's a loser, I can forget about it; if it's convincing, I'll build the equity curves and take a closer look at the stats and the individual trades.

So I'll get a coffee and do the walk-forward.
 
I don't know if what you mean by "walk-forward" is the same thing I mean by "out-of-sample". Probably yes. If not, here's what I have to preach today. You can do any optimization you want, but you need to preserve roughly 33% of the data as unknown (the last part). Once you're done with all the optimizations and think your system is perfect, you test it on the out-of-sample, which is that 33% of data you left unknown. If it also works (more or less) on that unknown data, the system is most likely a good one; otherwise it most likely is not. And if it doesn't work, you can't change the system and try again, because the out-of-sample would no longer be "out": you'd already know what happens on it (that the system doesn't work).
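The holdout rule described above can be sketched in a few lines (purely illustrative; the 33% figure comes from the post, the names are mine):

```python
# Sketch of the ~33% holdout rule: the final third of the data stays untouched
# until every optimisation decision has been made, then gets tested exactly once.
def split_in_out_of_sample(bars, holdout_fraction=1 / 3):
    """bars: a chronologically ordered sequence of price bars.
    Returns (in_sample, out_of_sample)."""
    cut = int(len(bars) * (1 - holdout_fraction))
    return bars[:cut], bars[cut:]

bars = list(range(9))          # stand-in for 9 years of data
ins, oos = split_in_out_of_sample(bars)
print(len(ins), len(oos))      # 6 3
```

The point is not the arithmetic but the discipline: everything you do during development touches only `ins`, and `oos` is consumed by a single test.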
 
So you only get one shot at running an out-of-sample test (walk-forward).

That makes sense, and I did already appreciate the point, although sometimes it's easy to lose the overview of the plan, especially when changing it halfway through.

Thanks.
 
Re: Optimising Saviour001 - known unknowns

I still can't put my finger on it but what bothers me about optimisation is that it seems like a manual or discretionary intervention which I should be able to build into the system so that it becomes unnecessary.

I don't mean I want to build a system that re-optimises itself every bar, I mean I want a system where I know that the moving average length that I am using has some sort of relation to a physical characteristic of the market price.

That is not an obvious or easy thing to program into a system though. If I decided that the moving average length should be 5 * ATR, then the moving average length will change with every bar, so no trading platform that I've seen will directly implement it - I would have to keep an array of the moving average values from length = 1 to 100.
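For illustration, here is a minimal version of that workaround: compute the moving average at whatever length the volatility measure dictates on each bar. The `5 * ATR` mapping is the hand-waving from above, not a real rule, and the ATR values here are assumed to be pre-scaled so that `5 * ATR` lands in a sensible bar-count range:

```python
# Sketch (hypothetical): a moving average whose length is driven by a
# volatility measure, clamped to [1, max_len]. Computing the window on
# demand is equivalent to keeping an array of MAs for lengths 1..100.
def adaptive_ma(closes, atrs, max_len=100, k=5):
    out = []
    for i in range(len(closes)):
        # desired length from the volatility measure on this bar
        length = min(max_len, max(1, round(k * atrs[i])))
        window = closes[max(0, i - length + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

closes = [1.0, 2.0, 3.0, 4.0, 5.0]
atrs   = [0.2, 0.2, 0.6, 0.6, 0.2]   # -> lengths 1, 1, 3, 3, 1
print(adaptive_ma(closes, atrs))     # [1.0, 2.0, 2.0, 3.0, 5.0]
```

So a platform doesn't need to support variable-length built-ins; a dozen lines of script do the bookkeeping.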

I'm just wasting time here. I'm not going to find any relationship between price characteristics and optimal parameter values any time soon now.
 
You've just reminded me that it's time to buy this product:
http://www.scientific-consultants.com/software.html

Nope, not yet. I remembered it costing 60 dollars, but it's 160 dollars. I still can't buy it. Too expensive.

What you mentioned actually reminds me more of walk-forward optimization for which I almost bought this tradestation product once:
http://www.rinafinancial.com/Optilogix - removed.asp

But then my friend tried it and it was almost impossible to use (maybe we got it for free on emule actually - I don't remember any more).

However, as I said, TS Evolve, also a product for tradestation (see first link), does something related to optimization. According to the book by Katz, you feed it all the parameters you know - moving averages, time of the day, range of the day, day of the week, month of the year, etcetera - and it finds the best combination of them all for you, through genetic optimization.

Maybe you should switch to tradestation and we should split the cost of that software.

But I also found that the same thing can be done, even better, via a brute-force optimization of the same parameters - except that you have to use a smaller number of parameter values, because brute force tries everything rather than searching genetically (smart optimization).

On second thought, don't forget the Rina product. It sounds perfect for you:

Are you optimizing a system in TradeStation?

Do you want to periodically adapt your system?

Do you know it's possible to use TradeStation's optimization capabilities without curve fitting?

Do you want to know what would have happened by changing system inputs based on system performance for a variety of objectives?

Now you can apply walk-forward optimization in a seamless environment with TradeStation to minimize the impact of hindsight for more robust system development.

It sounds perfect, as I said. But my friend, who's a programmer, very intelligent and hard-working, said - if I remember correctly - that it's a mess to configure that software. My usual recommendation is to keep things simple, use the simplest tools, and do the rest with our own brain, also via guesstimates. For example, for 10 years all I've been using is ts2000i and excel.

By the way, I am not sure you're using the term "walk-forward optimization" correctly (as a synonym for "out-of-sample"), because the way both Rina Systems Inc. and I mean it, you continually re-optimize your system's parameter values as time goes by, and what their software does is tell you whether, by doing that, you would have made money. In fact I think this is quite different from the concept of an "out-of-sample" test (even though somewhat related).

Let me show an example of this method, which I abandoned 8 years ago. It's the idea that by continuously optimizing the parameters of your system, the system will be in better shape to trade the markets. I used to think like this, but not any more.

For example, and this is how my curiosity arose, you find out that if you create and optimize a system based on two moving averages it will work perfectly for a year, and then it won't work anymore.

So you start thinking: are we sure that it doesn't work anymore because it's wrong? Or maybe does it stop working because the markets change?

Then you wonder: what if I re-optimize these two averages every six months? Won't it work at least for the following six months?

That's what the Rina software should be for. To tell you what happens to your systems if you re-optimize them every six months, which would be very difficult to calculate manually.

But now I've abandoned those systems based on two moving averages because I've managed to create systems that, with the same rules/parameters and coefficients, work for the entire in-sample (usually 6 years or a bit less) and out-of-sample (usually 3 years or a bit more).

Oh, here it is:
http://www.amibroker.com/guide/h_walkforward.html

That page explains clearly what the "walk-forward test" is and, implicitly, how it differs from the out-of-sample test:

Walk-forward testing
AmiBroker 5.10 features the automatic Walk-Forward test mode.

The automatic Walk forward test is a system design and validation technique in which you optimize the parameter values on a past segment of market data ("in-sample"), then verify the performance of the system by testing it forward in time on data following the optimization segment ("out-of-sample"). You evaluate the system based on how well it performs on the test data ("out-of-sample"), not the data it was optimized on. The process can be repeated over subsequent time segments. The following illustration shows how the process works.

[attached image: walkfwd2.gif - diagram of rolling in-sample/out-of-sample segments]
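The rolling procedure the page describes can be sketched roughly like this (`optimise` and `backtest` are placeholders for whatever routines you already have; the toy example at the bottom is mine):

```python
# Rough sketch of a walk-forward test: optimise on a rolling in-sample
# window, then evaluate on the segment that immediately follows it.
def walk_forward(bars, in_len, out_len, optimise, backtest):
    results = []
    start = 0
    while start + in_len + out_len <= len(bars):
        in_sample  = bars[start: start + in_len]
        out_sample = bars[start + in_len: start + in_len + out_len]
        params = optimise(in_sample)          # fit on the past segment only
        results.append(backtest(out_sample, params))
        start += out_len                      # roll the window forward
    return results

# toy example: "optimise" picks the in-sample mean,
# "backtest" measures the total error of that guess on the next segment
bars = [1, 2, 3, 4, 5, 6, 7, 8]
res = walk_forward(bars, in_len=4, out_len=2,
                   optimise=lambda s: sum(s) / len(s),
                   backtest=lambda s, p: sum(abs(x - p) for x in s))
print(res)  # [6.0, 6.0]
```

Each out-of-sample result comes from parameters the optimiser had never seen tested, which is the whole point of the procedure.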

In fact they're more related to one another than I was saying: the "walk-forward test" is nothing but a repeated "in-sample optimization plus out-of-sample test" across the whole sample, subdivided into smaller samples. Still, you should not use the term, because "walk-forward test" defines a specific method and procedure of in-sample optimization and out-of-sample testing. What you are referring to is merely an "out-of-sample test", don't you agree?

Anyway, if what you need is precisely walk-forward testing, then Amibroker also offers it, besides Rina (used with tradestation). At least that's what they say on that page. I don't trust it, like anything that makes things more complex: there are many dangerous implications in making things as complex as "walk-forward testing" gets.

To recapitulate: walk-forward testing is a form of out-of-sample testing that divides the sample into many sub-samples and sees the effect of optimization on each one of them.

Wait...

This may be something I was missing out on.

The premise of performing several optimization/test steps over time is that the recent past is a better foundation for selecting system parameter values than the distant past. The hope is that the parameter values chosen on the optimization segment will be well suited to the market conditions that immediately follow. This may or may not be the case, as markets go through bear/bull cycles, so care should be taken when choosing the length of the in-sample period.

Yes and no.

No more than yes, actually.

Not at all, actually.

I was thinking for a second that verifying the effect of optimization throughout the sample would have been a healthy process.

But it is not, in that:

1) As the quote says, you could be picking a period which is too short to comprise all the different types of markets. Instead of having one good six-year sample, you will have six smaller, useless samples. It's like the scene where Danny De Vito breaks a cigarette in two and wants to bet half a cigarette, but Jack Nicholson replies that two half-cigarettes are useless.

2) It complicates automation. Even if you find out that picking the most recent past is more useful than using the whole past, its implementation (in my system automation, at least) is too complex to make it happen anyway. Let alone the databases of systems I keep, where I now have to track the performance of 71 systems together.

3) It complicates things as far as back-testing, without adding enough benefits in terms of creating a profitable system.
 
Re: Optimising Saviour001 - known unknowns

I still can't put my finger on it but what bothers me about optimisation is that it seems like a manual or discretionary intervention which I should be able to build into the system so that it becomes unnecessary.

I don't mean I want to build a system that re-optimises itself every bar, I mean I want a system where I know that the moving average length that I am using has some sort of relation to a physical characteristic of the market price.

That is not an obvious or easy thing to program into a system though. If I decided that the moving average length should be 5 * ATR, then the moving average length will change with every bar, so no trading platform that I've seen will directly implement it - I would have to keep an array of the moving average values from length = 1 to 100.

I'm just wasting time here. I'm not going to find any relationship between price characteristics and optimal parameter values any time soon now.

That's exactly why I decided to go with a half automated solution, where I change the parameters every month to match the market characteristics. Otherwise the system is not always going to fit and can only result in excessive drawdown.
 
Saviour001's optimised results on optimisation window

HTML:
Performance	All Trades
Total Net Profit	$577095.00
Gross Profit	$2143120.00
Gross Loss	$-1566025.00
Commission	$0.00
Profit Factor	1.37
Cumulative Profit	$35051.47
Max. Drawdown	$-12537.53
Sharpe Ratio	0.14
	
Start Date	01/01/2000
End Date	30/06/2008
	
Total # of Trades	2589
Percent Profitable	50.10%
# of Winning Trades	1297
# of Losing Trades	1292
	
Average Trade	$222.90
Average Winning Trade	$1652.37
Average Losing Trade	$-1212.09
Ratio avg. Win / avg. Loss	1.36
	
Max. conseq. Winners	7
Max. conseq. Losers	6
Largest Winning Trade	$11200.00
Largest Losing Trade	$-7785.00
	
# of Trades per Day	0.83
Avg. Time in Market	8.79 days
Avg. Bars in Trade	201.0
Profit per Month	$347.99
Max. Time to Recover	855.16 days
	
Average MAE	$917.75
Average MFE	$1134.13
Average ETD	$911.23

This is the average or cumulative results for 13 forex pairs with DTC data from 2000 to mid-2008.

As yet no equity curve.

So I must think about it. Is this the finished product?

I looked at the moving average lengths, I don't want to change them.

I looked at the ATR setting, I don't want to change that.

I've optimised the fixed stop and target distances - that's probably the first thing that will break down when testing in the walk-forward out-of-sample window.

The trigger and the filters are still as described 4 or 5 posts above - so there's no ADX and there's only about 300 trades per year. Not what I wanted but I feared that increasing the frequency more would lead to worse stability, just from looking at the results from shorter time frames and shorter moving averages.

I looked at the time frame and I'm keeping it 45min bars. At least that's one thing I'm happy with.

Right. I can't think of anything I can do to this system to make it more robust so I'm going to do the walk-forward / out-of-sample test on it now.
 
Re: Optimising Saviour001 - known unknowns

That's exactly why I decided to go with a half automated solution, where I change the parameters every month to match the market characteristics. Otherwise the system is not always going to fit and can only result in excessive drawdown.

Bill Dunn of Dunn Capital re-optimizes once a year... you're not alone.
 
Complete loser

That put paid to that little system. Complete wipe-out, losing system.

It wasn't even curve-fitting, it just lost money in most instruments most of the time.

Maybe I should allow myself to run a rough unoptimised version of the system on the walk-forward out-of-sample period as soon as I think I've found something, so I can be sure the results look roughly the same in the future before doing all this faff with the optimisation.

Then again, I think I'll try to avoid systems requiring optimisation.
 
software for out-of-sample and walk-forward testing

[snipped...]
In fact they're more related to one another than I was saying: the "walk-forward test" is nothing but a repeated "in-sample optimization plus out-of-sample test" across the whole sample, subdivided into smaller samples. Still, you should not use the term, because "walk-forward test" defines a specific method and procedure of in-sample optimization and out-of-sample testing. What you are referring to is merely an "out-of-sample test", don't you agree?

Anyway, if what you need is precisely walk-forward testing, then Amibroker also offers it, besides Rina (used with tradestation). At least that's what they say on that page. I don't trust it, like anything that makes things more complex: there are many dangerous implications in making things as complex as "walk-forward testing" gets.

Thanks Travis. The main problem with the automated walk-forward / re-optimisation software is that it chooses the absolute best optimised parameter values, but that "best" value might be a random outlier in a swathe of poor results; what you actually want is a value whose nearby values are all good too.
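One hedged sketch of a fix (my own illustration, not something either software does): smooth the stop/target profit grid before taking the argmax, so an isolated outlier can't win the optimisation.

```python
# Sketch: pick the cell of a 2-D profit grid whose 3x3 neighbourhood
# average is highest, rather than the raw maximum cell.
def robust_argmax(grid):
    """grid: 2-D list of profits indexed by (stop, target) steps.
    Returns (i, j) of the cell with the best neighbourhood average."""
    rows, cols = len(grid), len(grid[0])
    best, best_ij = float("-inf"), (0, 0)
    for i in range(rows):
        for j in range(cols):
            cells = [grid[a][b]
                     for a in range(max(0, i - 1), min(rows, i + 2))
                     for b in range(max(0, j - 1), min(cols, j + 2))]
            score = sum(cells) / len(cells)
            if score > best:
                best, best_ij = score, (i, j)
    return best_ij

grid = [[10, 12, 11],
        [ 9, 90,  8],    # 90 is an isolated spike a naive optimiser would pick
        [40, 42, 41]]    # the bottom row is a stable plateau
print(robust_argmax(grid))  # (2, 0) - a bottom-row cell, not the spike at (1, 1)
```

The 3x3 window and simple averaging are arbitrary choices; the idea is just that robustness, not peak profit, decides the winner.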

I think the term walk-forward is more descriptive of the process you quoted from Amibroker. Out-of-sample is perhaps the better term for a single test on unseen data.
 
Oh, good. You say "I think the term walk-forward is more descriptive of the process you quoted from Amibroker. Out-of-sample is perhaps the better term for a single test on unseen data". So you'd agree to just call it "out-of-sample" from now on, because otherwise it's confusing when I read your posts.

I also agree with what you say about walk-forward optimization picking as "best" some random and wrong values (outliers) and so on, so let's throw the walk-forward concept and machinery down the toilet once and for all. It's more complex, it's dangerous... all bad. Down the toilet.

Regarding your post before this last one, don't forget to try creating the opposite system if a system fails so badly on all markets. Also, if the system fails in the in-sample, don't bother to peek at the out-of-sample, or you'll learn how it behaves and compromise the validity of that test.
 
Re: Complete loser

That put paid to that little system. Complete wipe-out, losing system.

It wasn't even curve-fitting, it just lost money in most instruments most of the time.

I'll fade that system....

Could you not?
 
The system spent 8 years doing one thing, then two years doing the opposite - so who's to say which it's going to do in 2011?

So I'd have to look into it - investigate the trades and so forth - and I don't think it's worth it. It lost money all round, but I didn't stop to check how consistently.
 
Well, there you go. Your out-of-sample was too short. 8 years vs 2 years is not good. You know what has happened in 2008 and 2009? They're not normal years: all markets behaved abnormally especially in the second part of 2008, enough to make any good system unprofitable. I would recommend using 2000 to 2005 (included) as in-sample, and all the rest as out-of-sample, for any market.
 
Optimisation & out-of-sample periods - art not science

Well, there you go. Your out-of-sample was too short. 8 years vs 2 years is not good. You know what has happened in 2008 and 2009? They're not normal years: all markets behaved abnormally especially in the second part of 2008, enough to make any good system unprofitable. I would recommend using 2000 to 2005 (included) as in-sample, and all the rest as out-of-sample, for any market.

I've been cogitating on this. The main problem is that the system just lost money on those markets for 2 years in a row. I can't trade that. Even if I'd used the periods you suggest, the results wouldn't have been much different.

The angel on one shoulder is telling me to ditch this system and forget about it. The devil on the other shoulder is saying I can save the core indicators I'm using but just transplant the algorithm for another smarter one.

But the problem with the periods remains. I based the decision to use 2000-01-01 to 2008-06-30 as my optimisation window and 2008-07-01 - 2010-12-31 as my out-of-sample period because:

(1) most of the FXCM data for my chosen forex pairs starts in 2008 - although I do have the data for the core forex pairs going back to 2006.

(2) my previous testing had shown massive profitability for a considerable number of trial systems only during the credit crunch - but since the results from that testing are gone, I can't be more precise about the dates. I thought this "easy profits" period started in 2007 with the onset of the credit crunch; I may be wrong - the attached image shows the S&P volatility was really only different in the second half of 2008. Since I don't want to look at an equity curve for every trial system I write, and since I don't want to be misled by the results from this "easy profits" period, I decided to split the period half into my optimisation window and half into my out-of-sample period.

So I'm contemplating actually increasing my optimisation window and decreasing my out-of-sample.

What I could do is to remove 2000-01-01 to 2000-12-31 from the optimisation period and use it as a second out-of-sample. That means I can make my optimisation window 2001-01-01 to 2008-12-31 to include more Credit Crunch, and use 2009 and 2010 as out-of-sample still, with 2000 thrown in for good measure.
 

Attachments
  • vix.png (21.8 KB)
as close to science as possible

Thanks for caring about what I wrote.

I'd stick to these two principles.

1. The out-of-sample has to be at least 33% of the entire data set.

2. The in-sample and out-of-sample both have to include different phases/types of markets.

According to both principles, the out-of-sample you used is not good. If instead you use 2006 to now as the out-of-sample, it might turn out profitable, and even if the 2008 drawdown is huge you should not discard the system.

Many of my systems have their biggest drawdown in the second part of 2008. They are good otherwise. If I had used an out-of-sample lasting from the second part of 2008 to the end of 2010, some might even be unprofitable (and others break-even or barely profitable). Should I discard them because of it? I don't think so.

That's why you need a bigger out-of-sample and I can tell you the proportions, too. Let's say that your drawdowns last one year for your systems (from peak to subsequent peak, not from peak to bottom). Can we choose an out-of-sample that lasts one year? Never. Because if it comes across a drawdown it won't even have the time to recover from it. Now, since many of my systems have drawdowns that do last one year or even two years (from peak to peak), I could never choose an out-of-sample lasting less than 3 years, or else a system will seem unprofitable just because, by chance, during the out-of-sample it is incurring its regular drawdown.

So we could add a third principle:

3. The out-of-sample has to be at least twice as long as the average drawdown for a system (of the type you create), so that a good system will have time to recover from a drawdown AND to make money, within the duration of the out-of-sample. If a system has drawdowns that last one year from peak to peak, then you need an out-of-sample of at least 2 years: one year for the drawdown, and one year to produce profit. But since we don't know from the start how long the worst drawdown will be, we should be on the safe side and have an out-of-sample lasting three times as long as the expected max drawdown (from peak to peak).
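The peak-to-peak drawdown duration that principle 3 relies on is easy to measure from an equity curve; a minimal sketch (my own, with toy numbers):

```python
# Sketch: longest peak-to-peak drawdown duration - how many periods the
# equity curve spends below a prior peak before making a new high.
def longest_peak_to_peak(equity):
    """equity: equity curve sampled once per period."""
    peak = equity[0]
    run = longest = 0
    for value in equity[1:]:
        if value >= peak:
            peak = value     # new high: drawdown over
            run = 0
        else:
            run += 1         # still underwater
            longest = max(longest, run)
    return longest

equity = [100, 110, 105, 102, 108, 112, 111, 115]
print(longest_peak_to_peak(equity))  # 3 periods spent below the 110 peak
```

By the rule above, the out-of-sample should then be at least two to three times this duration, so a normal drawdown plus a recovery both fit inside it.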
 
New system (for me) - FirstStrike

This one's thanks to Travis who put the link to the PDF on his journal.

I'm going to implement and test this First Strike system whose rules are dead simple:

at Monday's open, place stops 50 ticks above and below the market, OCO.

when one is filled, cancel the other and place a stop loss 60 ticks away.

if the stop is not hit, exit on Friday at the close.
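For illustration only, the three rules above as a toy one-week backtest. Fills are simplified (same-bar entry/stop ambiguity and gaps are ignored, the buy stop is checked first when both would fill), and `tick = 0.0001` is assumed for GBP/USD:

```python
# Sketch of the First Strike rules on one week of bars (Monday first, Friday last).
def first_strike_week(bars, entry_ticks=50, stop_ticks=60, tick=0.0001):
    monday_open = bars[0]["open"]
    buy_stop  = monday_open + entry_ticks * tick
    sell_stop = monday_open - entry_ticks * tick
    position, entry, stop_loss = 0, None, None
    for bar in bars:
        if position == 0:
            if bar["high"] >= buy_stop:        # buy stop touched (checked first)
                position, entry = 1, buy_stop
            elif bar["low"] <= sell_stop:      # otherwise the sell stop
                position, entry = -1, sell_stop
            if position:
                stop_loss = entry - position * stop_ticks * tick
                continue                        # no stop-out on the entry bar (simplification)
        elif position == 1 and bar["low"] <= stop_loss:
            return stop_loss - entry            # long stopped out: -60 ticks
        elif position == -1 and bar["high"] >= stop_loss:
            return entry - stop_loss            # short stopped out: -60 ticks
    # stop never hit: exit at Friday's close (0.0 if never filled)
    return 0.0 if position == 0 else position * (bars[-1]["close"] - entry)

week = [
    {"open": 1.5000, "high": 1.5060, "low": 1.4980, "close": 1.5050},  # Mon: long at 1.5050
    {"open": 1.5050, "high": 1.5100, "low": 1.5020, "close": 1.5090},  # Fri: exit at the close
]
print(round(first_strike_week(week), 4))  # 0.004 - a 40-tick win
```

A real test would also have to decide what happens when both entry stops, or the entry and the stop loss, are touched inside the same bar.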

According to the PDF, it returns good money over the last 30 years.

Slight problem: potentially large and long drawdowns, but that is on the GBP/USD, not across a basket of currencies which I'm going to test.

Those tick values - 50 tick entry level, 60 tick stop loss - are obviously based on GBP/USD that the author is focusing on.

What I want to do is change them to relative values so I can apply it across the board to any forex pair.

So on the GBP/USD right now, the 100-bar ATR is 0.0030 on hourly, 0.0150 on daily and 0.0350 on weekly. So 50 ticks represents 5/3 of the hourly, 1/3 of the daily or 1/7 of the weekly ATR. I'll use that as the starting point.
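A quick check of that arithmetic (tick size 0.0001 assumed for GBP/USD):

```python
# Express the fixed 50-tick entry distance as a fraction of the 100-bar ATR
# on each timeframe (ATR values are the ones quoted above).
tick = 0.0001
entry = 50 * tick                      # 0.0050 in price terms
for name, atr in [("hourly", 0.0030), ("daily", 0.0150), ("weekly", 0.0350)]:
    print(f"{name}: 50 ticks = {entry / atr:.2f} x ATR(100)")
```

That reproduces the 5/3, 1/3 and 1/7 ratios, and going the other way (ratio times current ATR, rounded to ticks) gives the portable entry distance for any pair.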
 
Yeah, I am glad you used that link. The system is still too complex for me, as I still can't automate a take-profit or a stop-loss.

If you want to reduce the drawdown you could simply make it exit at the end of the (first) day. It might still work, with lower gains and lower losses.
 