Walk Forward Analysis - the only logical successor to backtesting [DISCUSS]

Darwin-FX

Junior member

Hello,

I'm Darwin, and this is my second article, in which I'll explain how a walk-forward analysis works and what benefits it brings you as an EA trader.

But I don't just want to explain how a walk-forward analysis works (others have already done that); I want to explain why it is the only logical way to analyse EAs.

And as people keep asking why I release free stuff, let me explain my motivation: I am targeting a job in the trading industry, and such jobs are not easy to get. Being known in the community would help me a lot to accomplish this goal, so I decided to release some of my private tools for free and/or open source to make a name for myself over the next months. I am not here to sell you stuff, keep that in mind, please!


I know that the title is a bit provocative, but it's often easier to start a discussion with a controversy ;)
Also, I know that the best EA traders can use normal backtesting and still be profitable - but most cannot. And even if you can, a WFA is still better :p

I WILL RELEASE A WALK-FORWARD-ANALYZER-TOOL FOR FREE WITHIN THE NEXT FEW DAYS THAT DOES ALL OF THIS 100% AUTOMATED; UNTIL THEN, USE THE TIME TO MAKE YOURSELF FAMILIAR WITH THE CONCEPTS.

That said, none of this article is needed to use the walk-forward analyzer tool; it will be as easy to use as the MetaTrader 4 backtester.



After reading, you have two choices:
1. Agree with my arguments, get rid of the flawed backtesting approach once and for all, and use WFA in the future.
2. Disagree with my arguments - but then please argue with me. Do yourself a favour and don't stick to backtests just because you "know" them.


By the way, a backtest has many more disadvantages compared to a walk forward analysis than the ones I describe here. If you want to learn more on this topic, read my first article:
"Why backtests are worthless, fixed-logic-EAs are flawed and your parameters are bad [DISCUSS!]" (look it up on Google or through the forum search)





Initial Situation

First, let's sum up what the essential parts of every trading system are!

1. The system's logic
The most obvious part! And for a lot of beginners it's the only part they know, which is dangerous.
This might be a manual trading system, an expert advisor, or any other form of fixed trading-logic / trading-system / trading-strategy (btw: all 3 terms mean the same thing in this article).

So far, so good. But you all know that every strategy has some kind of variables/parameters (like the periods of moving averages or stop-loss levels), which are NOT FIXED(!) but can vary - which brings us to the second part.
(If you just set them to a fixed value because "this should work"... well, it won't, at least not in the long term.)


2. The system's parameter-ranges
The ranges of the parameters are an ESSENTIAL part of every trading system, as they determine its exact behaviour (though the trading logic always stays the same).

So, for example, a moving-average period might range from 5-15 to capture short-term price movements.
It is not 6, not 11, and not 14 - it is 5-15. Because the markets change, we cannot commit to one concrete value; it is ALWAYS a range!


3. The market, the amount of data, the desired characteristics
Every strategy trades on a market, so we have to determine which one (e.g. EURUSD / H4).

But that's not enough; we also have to determine how much past price data we want to use to evaluate our possible parameter choices.
Because, as I said, a system always has parameter ranges, but for live trading we have to choose concrete values!

And we do this by evaluating all parameter possibilities on the last X years of price data. After the evaluation,
we end up with a huge list of possible and "independent" trading systems (each of them with different parameters, but the same main logic).
And each has its own characteristics like "profit", "profit factor" or "relative drawdown".

So, we also have to determine how to pick the "best" parameters.

But it's not as simple as saying "I want a lot of profit", because these characteristics often don't hold in the future!
Instead, we want to choose in a way that gives us a high probability of picking parameters that will succeed during live trading.


Simple, isn't it?





An illustrating example

The system's logic:
Let's suppose a very basic trading system: "If the price moved more than X pips in the last Y days, a course correction will happen."
(This is not a valid strategy; it's just thin air for the sake of simplicity.)
The parameters would be X and Y in this case.

The system's parameter-ranges:
To make it all simple, I chose X to be 100-200 pips in this example, and Y to be 2-3 days. (also, just thin air!)

Amount of data & preferred characteristic:
Here we choose the last 10 years of data to evaluate the possible parameters on, and "profit" as the preferred characteristic.
(Though, as I said, in reality profit is not a very good indicator for parameters that will perform well in the future.)

The process:
Ok, now before we can trade that system, we run an optimisation on the last 10 years.
That means we backtest every possible parameter combination for our system and choose the best in terms of "profit".

For sake of simplicity, here is a cropped example:

"If the price moved more than 100 pips in the last 2 days, a course correction will happen"
=> 1000$ in the last 10 years
"If the price moved more than 150 pips in the last 2 days, a course correction will happen"
=> 1200$ in the last 10 years
"If the price moved more than 200 pips in the last 2 days, a course correction will happen"
=> 1500$ in the last 10 years
"If the price moved more than 100 pips in the last 3 days, a course correction will happen"
=> 900$ in the last 10 years
"If the price moved more than 150 pips in the last 3 days, a course correction will happen"
=> 950$ in the last 10 years
"If the price moved more than 200 pips in the last 3 days, a course correction will happen"
=> 950$ in the last 10 years

So, according to our preferred characteristic (profit), we would choose X = 200 pips, Y = 2 days, and then just trade the strategy.
Well, that's the "normal" process of EA trading, and I claim it does not work this way.
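To make the selection process concrete, here is a rough Python sketch of the brute-force optimisation described above. It is purely illustrative: the profit table just hard-codes the made-up numbers from the example, and `backtest` stands in for a real backtester run.

```python
from itertools import product

# Toy profit table holding the made-up results from the example above:
# (X pips, Y days) -> profit in $ over the last 10 years.
PROFITS = {
    (100, 2): 1000, (150, 2): 1200, (200, 2): 1500,
    (100, 3): 900,  (150, 3): 950,  (200, 3): 950,
}

def backtest(x_pips, y_days):
    """Stand-in for a real backtest run over 10 years of data."""
    return PROFITS[(x_pips, y_days)]

def optimise(x_range, y_range):
    """Backtest every parameter combination and keep the most profitable."""
    return max(product(x_range, y_range), key=lambda params: backtest(*params))

print(optimise([100, 150, 200], [2, 3]))  # (200, 2) - the $1500 combination
```

This is exactly the "pick the highest profit" rule from the example - and exactly the rule whose validity the rest of this article puts in question.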





EA-Analysis: How to ask the right questions

Ok, now that I have described the current process and how it is all done, here comes the "new" part.
The goal itself always stays the same, we want to pick the best parameters (based on some kind of evaluation on the past), and then we want to trade live!

Remember: The only thing a backtest can tell you is "how well my system + parameters performed in the past".
But that is NOT what we want to know! Be sure that you really understand this.

Initial question - what we actually want to prove with our analysis:
"Does the way we choose parameters for live trading ('pick the one with the best profit over the last 10 years' in the above example) give us a high probability of picking parameters that are profitable during live trading?"

So we are actually interested in the relationship between past performance and future performance, not in backtest results!!

If the answer is no, one or more of the 3 things described in "Initial Situation" is wrong - it might be the logic itself, the parameter ranges, etc.
If the answer is yes, the performance in the past and the performance in the future are somehow correlated for our EA, and we can trade the system!
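To illustrate what checking that past->future relationship could look like, here is a small hedged sketch: given one (past profit, future profit) pair per test case, we measure whether the two series correlate. All numbers are invented purely for illustration.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Invented (past profit, future profit) pairs, one per test case.
results = [(1500, 300), (1100, 250), (900, -50), (1300, 280), (800, -120)]
past = [p for p, _ in results]
future = [f for _, f in results]

print(round(pearson(past, future), 2))  # 0.91 - past and future performance correlate
```

A strongly positive value would suggest that picking the historically best parameters is a sensible live-trading rule; a value near zero would mean the past tells you nothing about the future for this EA.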






The logical evolution; From Backtests to Walk-Forward-Analysis

Step One: Backtesting - in its worst form


Pro:
  • We will get parameters that performed well on a wide range of data.

Contra:
  • Every single EA trader who used this method and then tried to trade an EA live based on good backtests can tell you: it just does not work this way.


  • Overfitting / curve fitting!! We first optimise the parameters and then test them, all on the same data.
    That means we have no clue whether we have valid parameters or overfitted ones.

    Overfitting means we optimised towards random behaviour within our data - behaviour that exists only in this particular dataset and will not hold in the future.

    That means we captured a relationship that existed but was not a sound one. Like this:


    Don't fool yourself into thinking "ah, this won't happen"... Almost all "relationships" within the markets are like this, as most price movements are random!!

    If you do not understand overfitting, google for more information, as it is our archenemy in mechanical trading.


  • No significance for future performance!! Remember that the initial question is not how well our parameters performed in the past, but how high the probability of success AFTER the optimisation timespan (so, "in the future", during live trading) will be.

    As we did not run any tests with our parameters that take the relative future into account, we did not even try to answer the initial question.
    We just answered the question "how well did our parameters perform in the past", without taking anything about the "future" into account => very bad.


  • Even if you could somehow magically invalidate the points above: because the parameters worked well on a huge amount of data, they are not really the best for the current market - just averagely good across all market conditions.










Step Two: Backtesting using unseen/out-of-sample data


Notice: The first dataset, which we use to optimise our parameters, is called "in-sample" (IS). The second, unseen dataset is called "out-of-sample" (OOS).


Pro:
  • We now have a lower chance of getting overfitted parameters, as we use an independent dataset to validate our parameter choices.


Contra:
  • Due to the infinite number of senseless/unsound relationships within the markets, we still have a (too high) risk of overfitting, as chances are that we just got parameters that are valid (curve-fitted) on both datasets, but not valid in the future.

  • If the system did not work out-of-sample and you then begin to tune your parameters until you get good OOS results, your OOS data is no longer "unseen" and effectively becomes in-sample, which makes the whole two-dataset approach useless!

  • We still use a very large part of our data (in-sample) to find the best parameters, which also means we use a lot of "old" data. That is not a good decision, as the behaviour of the markets in the past is not equal to the behaviour of today.

  • Not only is our in-sample dataset too large; our out-of-sample dataset is too large as well, and therefore unrealistic. In the example above it would be a few years - but would you really like to trade a system for years before choosing new, re-adjusted parameters? I would not!
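The split itself is trivial; here is a minimal sketch (the 80/20 fraction is an arbitrary illustration, not a recommendation):

```python
def split_is_oos(bars, is_fraction=0.8):
    """Split price data chronologically: the first part is used for
    optimisation ('in-sample'), the rest stays unseen ('out-of-sample')."""
    cut = int(len(bars) * is_fraction)
    return bars[:cut], bars[cut:]

bars = list(range(10))                # stand-in for 10 price bars
is_data, oos_data = split_is_oos(bars)
print(is_data, oos_data)              # first 8 bars, then the last 2
```

Parameters are optimised on the in-sample part only, and the single chosen combination is then backtested once on the out-of-sample part. As soon as you start re-tuning based on the OOS result, the split is void.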










Step Three: Backtesting using a more realistic data-amount


Pro:
  • We now only use recent market behaviour to optimise our parameters, so we capture the market "at the moment", and not "10 years ago".

  • We now test our parameters on a timespan that is more realistic (months, not years!).


Contra:
  • We only used a small part of the available price data for our tests. This is not very efficient!

  • Ok, remember the initial situation, where we settled on parameter ranges, the amount of data to optimise on, and the "desired characteristic". Our analysis has the purpose of verifying these choices - whether they are valid or not.

    But in this case we only ran one test with them: we optimised on one part of the data, then chose 1 parameter combination and tested it on 1 "unseen" dataset.

    Facing the millions of possible parameter combinations an EA can have, and the infinite ways the markets can change to generate new and "unseen" behaviour, do you really think that 1 test, 1 datapoint, 1 past->future relationship is enough to judge from? Of course not! So why are you still using normal backtests? ;)










Step Four: Walk Forward Analysis


So, as you might see, a walk forward analysis is the same thing as doing a normal backtest + out-of-sample test, but we do it over and over again, so we end up not just with 1 test case but with many (100-150 in most cases, up to 1000 if we choose a very small test period).

That way we can verify our system + our optimisation methodology on many, many independent test cases, which is THE reason why we want to use WFA instead of every other analysis method described here.
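As a sketch, the rolling optimise-then-test loop could look like this. The `optimise` and `backtest` callables are placeholders for whatever your platform provides, and the window lengths are arbitrary illustrations:

```python
def walk_forward(bars, opt_len, test_len, optimise, backtest):
    """Slide an optimisation window followed by a test window across the
    data; only the test-window results enter the final report."""
    results = []
    start = 0
    while start + opt_len + test_len <= len(bars):
        opt_window = bars[start : start + opt_len]
        test_window = bars[start + opt_len : start + opt_len + test_len]
        params = optimise(opt_window)                  # fit on the "past"
        results.append(backtest(params, test_window))  # judge on the "future"
        start += test_len              # step forward by one test window
    return results

# Demo with dummy stand-ins: 100 bars, 30-bar optimisation, 10-bar test.
dummy_opt = lambda window: sum(window) / len(window)
dummy_bt = lambda params, window: len(window)
runs = walk_forward(list(range(100)), 30, 10, dummy_opt, dummy_bt)
print(len(runs))  # 7 independent past->future test cases
```

Each element of the returned list is one independent past->future test case; the final report is built from these results only, never from the optimisation windows.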

Pro:
  • For our final analysis report, we only take the green test results into account, as they are the "unseen future" relative to the red optimisation windows.
    That way, we simulate the same process we would face during live trading: optimisation on the past, trading on the (relative) future!

    This allows us to draw meaningful answers to the initial question, as we only analyse performance in "the future".

  • We use all the data available for our testing.

  • We get 100-150 independent "PAST => FUTURE" relationship tests, which gives us a clue about future performance, not past performance!

  • We avoid overfitting, as we use different datasets to optimise and to verify our parameters.

  • If we want to trade live, we simply take "one more step" of the WFA: optimise on the last available data (the "red" dataset would then end at the end of the chart), and then trade "in the future" (the "green" dataset would be our live trading). So we trade the system using the EXACT same methodology we have already tested 100-150 times.

  • Due to the frequent re-optimisation of parameters, the EA is also continuously re-adapted to the markets, which will most likely increase the overall profit.

  • A traditional backtest answers the question "how good was my EA in the past?", whereas a walk forward analysis answers the question "how good will my EA be in the future, during live trading?".

  • It not only evaluates an EA; it also evaluates the corresponding trading plan that determines how to pick the best parameters for live trading.


"Contra":

  • Most EAs will not pass this test. But this is not bad, because let's be honest: almost all EAs in existence are bull****. So if almost all EAs tested with this approach give bad results, that is actually a good sign.

    Even if a lot of people do not like to be disillusioned about their "holy grail money printing machines", it's better to face the truth during EA development than during live trading.


Contra:
  • There are some limitations to this process, which will be discussed in a later article - stay tuned! Also, I am currently working on more sophisticated analysis algorithms, but it will take a few months until I can show you something.




The main advantage is that we get 100-150 independent test cases, whereas a backtest + out-of-sample test gives us only 1 test case (1 datapoint).





I hope you like this article, and if you want to discuss it live, visit http://webchat.snoonet.org/forex (it is a web-based chat client),
then click on my nick (FX-Darwin or Darwin or Darwin-FX) and on "Nachricht" / "Message" / "Private Message" (or something like that).
Then you can chat with me directly, and I will post all interesting chats on the forum.

But the best thing would be to discuss it in public here on the forum, so please do so if you have anything to say on the topic, thanks!

-Darwin
 

NVP

Legendary member
hey D

so basically you are proposing traders test strategies real time .......no arguments there

N
 

Darwin-FX

Junior member
hey D

so basically you are proposing traders test strategies real time .......no arguments there

N

Hey N :p

Well, it is not about testing strategies real time, it is about testing strategies on the past after all. But just in a way that is the same as testing it real time :)

-Darwin
 

Liquid validity

I can't say I disagree.
By highlighting the weaknesses of optimised backtesting - namely curve fitting,
you are making a very valid point, as it is a trap that most trying this will inevitably fall into.

Personally I'm not a fan of any automated optimisation full stop.
For me, I used all the data I had for backtesting to increase the sample size.
Parameters kept to a minimum and optimisation was along the lines
of Ed Seykota's hunt and peck style.
I also opted for using all of the data to backtest based on Eckhardts thinking
regarding wasting data on OOS testing and degrees of freedom:
http://www.futuresmag.com/2011/03/04/the-battle-between-optimization-and-curvefitting
Throw in influences from David Harding and Taleb for good measure.

TBH, with a decent backtested sample size, upwards of say 3000, I personally
think it's pretty hard to curve fit as long as degrees of freedom are kept in
check and manual optimisation along the lines of Seykota's hunt and peck
method is used.

For me walk forwards optimisation is what is being discussed here,
it isn't true walk forwards testing in the sense of live realtime testing.
Any kind of automated optimisation can be abused just as much as backtesting
if the parameters make no sense and there are too many degrees of freedom.

On the whole though, good post and good threads, both of them.
Raising this whole issue certainly does no harm if it stops people from falling into
the traditional traps of low sample size and automated optimisation curve fit
methods.

For me, the final acid test has to be realtime live forwards testing.
No matter what else you do that stage cannot be bypassed anyway as it
is the only form of testing that includes real spread, comms, slippage and
technical issues.
That's what I did anyway.
 

random12345

Established member
I can't say I disagree.
By highlighting the weaknesses of optimised backtesting - namely curve fitting,
you are making a very valid point, as it is a trap that most trying this will inevitably fall into.

[...]
It's extremely difficult to generalise any of this stuff, though - each person has to know their system(s) inside out to know the weaknesses and the applicable logic. How capable the entry or always-in mechanics are at dealing with ranging often determines to what extent curve fitting is truly detrimental. If we take an EMA or TEMA as the ultimate example of something very incapable of dealing with ranging, then curve fitting these is obviously a disaster for implied future profits, and even worse on small samples.

The complexity of my exit criteria means I am forced to use relatively small samples to optimise (which I am a fan of) for feasibility alone hence my recent (and expensive) investigation into Xeon Phi racks etc, but that does not mean I am worried about the feasibility going forward. Sampling volatility in phasing markets isn't an entirely flawed concept - look at how the Japanese Yen gradually phased into non volatility post Kuroda just recently.
 

Liquid validity

It's extremely difficult to generalise any of this stuff though, each person has to know their system(s) inside out to know any weaknesses and applicable logic [...]
Yeah, fair point - it's all opinions based on what each of us has found to work
at a personal level.
In terms of efficiency it may well be better, I wouldn't argue that at all.
You have far more experience with this than I do, which is why I
stuck to Eckhardt's and Seykota's principles.

I've tried Ninja's genetic optimiser a few times, not a massive fan.
I s'pose it depends on the core methodology of your optimisation process.
At a guess you have coded the whole lot from scratch in Python, which is
beyond me.
That's what it boils down to - I don't have the ability to go down the
complex route, so a simple robust approach was the only realistic option
for me.
 

Trader333

Moderator
WFA still presumes that history will repeat itself but for a shorter time than using historical back testing and that in itself is questionable in my view.
 

random12345

Established member
Yeah fair point, its all opinions based on what each of us has found to work
at a personal level. [...]
Agree with you - it's all about finding a concept that works live and figuring out the strengths and weaknesses as they arise (the latter being an incredibly slow and arduous process). That's why I tend to avoid general discussion: there are whole books written about the topic without a single clue on how to actually make any cash. I imagine there are many conflicting, yet profitable, realities.
 

Darwin-FX

Junior member
I can't say I disagree.
By highlighting the weaknesses of optimised backtesting - namely curve fitting,
you are making a very valid point, as it is a trap that most trying this will inevitably fall into.

[...]

TBH, with a decent backtested sample size, upwards of say 3000, I personally
think its pretty hard to curve fit as long as degrees of freedom are kept in
check and manual optimisation along the lines of Seykotas hunt and peck
method are used.
Sure, but it seems that you know exactly what you are doing - most people don't. And if you don't, it is also quite easy to find overfitted systems with 3000 trades ;)
When I began to write algos for trading, I ran a test: I generated systems (without optimisation of any kind) on 8 years of data.

Then I took all the systems that behaved quite well on these 8 years, ran an out-of-sample test on 2 more years, and again filtered out all the bad systems.

The remaining systems were then tested on the remaining 3 years, and most of them failed miserably.
That was, of course, because an algo is dumb and does not "exactly know what it is doing".

So there is a need, at least for non-professionals and algos, for a testing method you can rely on more than backtest + OOS testing.





For me walk forwards optimisation is what is being discussed here,
it isn't true walk forwards testing in the sense of live realtime testing.
Any kind of automated optimisation can be abused just as much as backtesting
if the parameters make no sense and there are too many degrees of freedom.

For me, the final acid test has to be realtime live forwards testing.
No matter what else you do that stage cannot be bypassed anyway as it
is the only form of testing that includes real spread, comms, slippage and
technical issues.
Thats what I did anyway.
I can only agree with that :)
And you can also mess up your results with a WFA if you try hard enough to abuse it, but it is at least harder to do so.

And of course, nothing beats live testing - but the problem remains that you need quite a long time of live testing to get a statistically relevant number of trades to judge from.





WFA still presumes that history will repeat itself but for a shorter time than using historical back testing and that in itself is questionable in my view.
Well, it does not presume it; it helps you to verify it.
But of course this can only be done on past data: you can only verify that history repeated itself in the past, you cannot be sure that history will repeat itself in the future. But you can never be sure of that, I fear :/

I don't really get the second part - why is it for a shorter time? What do you mean by that? :)





I imagine there are many conflicting, yet profitable, realities.
Guess that's one of the least arguable posts ;)
Nevertheless, a discussion is always a good thing!


-Darwin
 

numbertea

Well-known member
Hey N :p

Well, it is not about testing strategies real time, it is about testing strategies on the past after all. But just in a way that is the same as testing it real time :)

-Darwin
As you state here, it is the same as backtesting. If you like to split up your data and backtest on different data sets at different times, then WFA is for you. Myself, I backtest on all the applicable data I have in a single run, as I find that this exposes me to the most data I can test on. If you think there is a difference between WFA and backtesting, then you have programmed your backtesting software incorrectly.

Cheers
 

Darwin-FX

Junior member
As you state here, it is the same as backtesting. If you like to split up your data and backtest on different data sets at different times then WFA is for you. Myself, I backtest on all the applicable data I have for a single run as I find that then I am exposed to the most I can test on. If you think there is a difference between WFA and backtesting then you have programmed your backtesting software incorrectly.

Cheers
It uses the same data, yes, but it's definitely not the same as a backtest.

Backtesting evaluates a given parameter set, analysing performance.
WFA evaluates a way to choose parameter sets, analysing the probability of future success.

So, could you please go a bit further into detail, explaining why you think it is the same? :)

-Darwin
 

numbertea

Well-known member
257 9
It uses the same data, yes, but it's definitively not the same as a backtest.

Backtesting evaluates a given parameter-set, analysing performance.
WFA evaluates a way to choose parameter-sets, analysing the probability of future success.

[...]
Ahhh ha. Now I think I understand what you are saying. Are you saying that you analyze time frames of data and then fit those with most current data groups to choose which algorithms you will use? I don't do that. I just use the overall data so that I can select patterns to use there. I can't imagine trying to predict what the future data set will look like as well as looking for repeating patterns within different types of data sets. How do you categorize the different data sets that you come across? I leave the categorizing to the algorithm of the patterns that hit or don't hit. I see no need to categorize data sets. Am I still misinterpreting your method?

Cheers
 

NVP

Legendary member
hey Darwin

all interesting stuff ............ are you successfully translating this testing into profitable trading ?

N
 

Darwin-FX

Junior member
35 0
Ahhh ha. Now I think I understand what you are saying. Are you saying that you analyze time frames of data and then fit those with most current data groups to choose which algorithms you will use?
Well, the WFAnalyzer in its current form cannot "choose which algorithms to use", only "which parameters to use".
Though choosing between different algos is planned in some form - at least I made sure my code design for the new algo has everything in place, so I could (and want to) implement such an optimisation procedure :)





I just use the overall data so that I can select patterns to use there.
You know what you are doing, that's the difference. But algos are dumb (even the "intelligent" ones) and many traders are not educated enough, so there is a need for a more robust testing approach :)





I can't imagine trying to predict what the future data set will look like as well as looking for repeating patterns within different types of data sets.
Well, I don't try to predict future datasets; I just analyse (based on past data) how correlated the performance on the past dataset and the (relative) future dataset is.




How do you categorize the different data sets that you come across? I leave the categorizing to the algorithm of the patterns that hit or don't hit. I see no need to categorize data sets.
I am not sure if I understand this - datasets are not categorized or anything, the initial dataset is just split up into smaller parts, as you can see in the last image in the initial post.




Am I still misinterpreting your method?
Depends - did I misinterpret your post? :clap:
If so, please tell me, so we can make sure we understand each other before discussing further :D






all interesting stuff ............are you successfully translating this testing into profitable trading ?
No, the WFAnalyzer that I will release soon was a proof of concept.
The new algo that I am currently writing is the actual project, and until the "real" algo is up and running I cannot do any trading :)

Also, before that, I have to write an EA builder, because I cannot trade an analysis method without a trading system, haha. But that should not be very hard, as the analysis part is the hardest when writing this kind of stuff.

All I can give you is a logical argument, no hard numbers (yet) - but that's the purpose of a discussion, isn't it? :)

Also, guess why I am looking for a job in the trading industry? ;)
Because, as a college student (at least on paper - actually I have been trading/coding full-time for a few years), I lack the money I need as starting capital.. :)

"all interesting stuff ............" < sarcasm?

-Darwin
 
