Build Neural Network Indicator in MT4 using Neuroshell

Here is the implementation of the trained net in NS Predictor.

So far I have found two methods:
1. Call the data into Excel and fire the net there.
2. Open the test data and use the trained net directly. I got R-squared 0.995045, MSE 0.000004, %sign 100% with 4998 test patterns. Compare with the training set: R-squared 0.999942, MSE 0.000001, %sign 100% with 19997 training patterns.

See attached pdf file for the actual report.
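The statistics quoted above can be recomputed from any exported actual/predicted columns. A minimal sketch in Python (the function name and sign convention are mine, not from NS Predictor; %sign here means the share of patterns where actual and predicted values agree in sign):

```python
def eval_metrics(actual, pred):
    """R-squared, MSE and %sign agreement between actual and predicted values."""
    n = len(actual)
    mean_a = sum(actual) / n
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, pred))
    ss_tot = sum((a - mean_a) ** 2 for a in actual)
    r2 = 1.0 - ss_res / ss_tot
    mse = ss_res / n
    # %sign: share of patterns where actual and predicted agree in sign
    same_sign = sum(1 for a, p in zip(actual, pred) if (a >= 0) == (p >= 0))
    return r2, mse, 100.0 * same_sign / n
```

A perfect prediction gives R-squared 1.0, MSE 0 and %sign 100, matching how the report figures are read.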

Fralo, have you made any MT4 indicator to show the predicted value on the chart, or created an EA to generate trading signals?

Best regards,
Arry
 

Attachments

  • Trained Net Implementation on Test data.pdf
    1,015.3 KB
Hi Krzys,

Please find attached the training and test results for NS2; the details are explained in the pdf file.

I used my own trick to combine your train and test data into a single file and adjusted the training and test ranges to match the original configuration. Using a 5-layer backpropagation network, the results were: minimum average error 0.0000302 (0.3 pip) on 19997 training patterns and 0.0000624 (0.6 pip) on 4998 test patterns, obtained within 8 min 40 s.

The second file uses NS Predictor on the training patterns only: the NN method gave MSE 0.000001 and R-squared 0.999942, while the GA gave MSE 0.000005 and R-squared 0.999743. Training took only 16 seconds for the NN and less than 3 minutes for the GA. The NN stopped by itself; the GA I stopped manually (it needs more time). I still need to find a way to evaluate the result on the test patterns.

Use the same password as for the last file; anyone else who wants it, please PM me.

Regards,
Arry

I think the error given in MBP and Predictor is RMS, but what sort of error is given in NS2? MSE? Any idea?

Also, because the results from NS2 and Predictor are very good and achieved in virtually 'no time', what comes to my mind is that the nets from NS2 and Predictor are very overfitted, i.e. have a lot of weights.

Arry, can I pass those results to a NN specialist to find the reason for these 'wonderful results' and get some comments?

Krzysztof
 

Attachments

  • err.JPG
    34.4 KB
Fralo,

Here is the NS2 output file; it includes all train and test patterns, with the actual values, the predictions and their differences.

Krzys, you can use the output data to verify and calculate your MSE or RMS. I just showed the actual result, without any adjustment.

Arry
 

Attachments

  • train test out.zip
    404.4 KB
Hi Krzys,

Here is the definition used for judging whether our net is good or not.

Arry
 

Attachments

  • Best Net Statistic.pdf
    75.2 KB
Great stuff Arry, but can you use one of the programs (NS2 or Predictor) to run the net and actually get the prediction output as a file or printout? Maybe the "file export" in NS2 will export a predicted output?

I have not tried to make an EA or indicator. That would not be too hard if a DLL is made to calculate the net output. The nets that Krzys and I have trained did not achieve an error better than 11 pips, and I think we also need a net to predict the Low to make an EA. 11 pips is too large to be useful, I think, but 1 pip (your net) would be very useful.

Maybe you could set a value for M in NS2 to get a production set, and retrain to find the error on the production set. Since we have ~20K data points, maybe train with 10K, test with 5K and use 5K for production (N=10000, M=5000)?
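The suggested split is just a chronological partition of the data. A sketch (NS2's N/M values are set in its own UI; this only illustrates the proportions, and the function name is mine):

```python
def split_patterns(data, n_train=10000, n_test=5000):
    """Chronological train/test/production split (no shuffling for time series)."""
    train = data[:n_train]
    test = data[n_train:n_train + n_test]
    production = data[n_train + n_test:]
    return train, test, production
```

With ~20K patterns this yields 10K/5K/5K, and the production set is never seen during training or testing.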

Because MBP does not use the test set to optimize, and one can train it without reference to the test set, we did not use a production set; the test set became the production set. We used the code that MBP generates and compiled an exe file to operate on the test set independently of MBP, so that we could get essentially a production set. (All of that is contained in the zips.)
Regards,
fralo/MadCow
 
Fralo,

I tried to extract the data following your advice: 15000 training patterns, 4997 test patterns and 4998 production patterns.

See the attached result in the pdf and output file.

I am verifying the data using Chaos Hunter to find the relation between input and output. I found that the relation between the net's input and output is 'very, very simple'. Why? Using just a single input, or a combination of two inputs, to build an output equation, we achieve an R-squared of more than 0.99xx. This might be the answer to why the net looks so sophisticated. But the question remains: why can NS see it as a simple thing but MBP cannot?


Arry
 

Attachments

  • to simple CH.png
    9.7 KB
  • 2nd train test prod output.zip
    404.4 KB
  • Using NS2 train Test Prod.pdf
    106.7 KB
I am not sure what is going on here. If you use Excel to calculate the standard deviation of the "Act-Net(1)" column in the zip file, you get 0.001898, or about 19 pips. If you calculate the running std dev over 500 samples, you get a variation from a low of about 8 pips to a high of about 37 pips. The std dev for the training set is 13 pips, and so on. I think that the std dev should be the same as the RMSE, and the RMSE would be sqrt(MSE)... aha! :smart: The sqrt of 0.0001 is 0.01. That must be the difference. NS2 must report MSE, but that is not in pips. To get pips, take the sqrt. Now the results are essentially the same.

If that's the case, then we need to work some more, because 11 pips is too much for real-time trading, I think. :(
Regards,
fralo/MadCow
 
Fralo,

Yes, you are correct; then the objective for training the net is not to minimize MSE but to minimize RMSE... I think.

Arry
 

Hmmm... I think, since sqrt is monotonic, that if you minimize one you minimize the other. So the objective is OK; we just need to be careful how we interpret it.
 
For sure the difference is in the network configuration. The 11 pips we got from MBP with a 15-30-10-1 space network. Arry is using 15-50-xx-xx-1, as I understand it (no idea what slabs 3 & 4 have), and probabilistic nets. So the key is to find out how many degrees of freedom are left in both nets after training.

Krzysztof
 
Krzys... I think the difference is in the way the error is reported by the network training program. MBP reports RMSE, which is in pips if the output is denormalized. NS2 reports MSE; take the sqrt to get pips. So 0.00006 MSE -> 0.0077 RMSE, or 77 pips. But here again I think there is some sort of normalization taking place: the actual RMSE (in pips) of Arry's net is more like 19 pips over all the data (see my post above).

Because it is not clear to me what MBP actually reports (normalization factor), I think the best way to measure the error is to run the net on the data and look at the std dev of the error. I think all the nets are in the same ballpark, and none are yet good enough. (We have 19 pips from NS2 and 11-15 pips from MBP.)

It may be time to look at other inputs and outputs, but we must be careful about how we interpret performance. It seems too early to build a trading system that can be tested with your other tools: performance would be pretty poor with errors as large as 11 pips (although on daily data 11 pips might be OK), so let's find something with better prediction. Maybe it's time to look at the multi-market problem, but I think we need at least 10K samples, and I think we should use a smaller TF than daily.

Alternatively, maybe we should look at the stationarity problem. The RMSE of the 11-pip net varies a lot depending on the period tested, and the RMSE of the 19-pip net also varies a lot. Maybe we need to look for inputs that are stationary, or find a way to adapt the inputs, e.g. using the difference between the High and an SMA of the High, etc. That would approach stationarity for our purposes.
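The High-minus-SMA idea can be sketched as follows (a sketch only; the period is illustrative, not something agreed in the thread):

```python
def sma(values, period):
    """Simple moving average; None until enough bars are available."""
    return [None if i + 1 < period
            else sum(values[i + 1 - period:i + 1]) / period
            for i in range(len(values))]

def detrended_high(highs, period=20):
    """High minus SMA(High): a roughly stationary candidate input for the net."""
    avg = sma(highs, period)
    return [None if a is None else h - a for h, a in zip(highs, avg)]
```

Subtracting the moving average removes the slow trend in the level of price, so the remaining series oscillates around zero regardless of where price itself is trading.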

What do you think?

fralo/MadCow
 
I already made an analysis of other inputs. Have you seen my post at FF? There I used Ehlers filters and lagged and momentum data as inputs to NN prediction in NS.

Today I made a similar analysis but used prices transformed with the Fisher transform, and the results are quite interesting. I will post those results there now so you can have a look.

Krzysztof
 
This is the input analysis done using NS for price transformed with the Fisher transform, for 3 sample sizes: 52k, 6k and 1.8k of 1-min EURUSD data. The objective was to check whether Fisherized price is a more valuable input for a NN than normal price. The screens are self-explanatory.
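For reference, a common formulation of the Ehlers Fisher transform is sketched below (the lookback period and smoothing constants 0.66/0.67 are the usual Ehlers defaults; the exact settings used for these tests are not stated in the thread):

```python
import math

def fisher_transform(prices, period=10):
    """Ehlers-style Fisher transform: normalize price into (-1, 1) over a
    rolling window, smooth it, then apply 0.5 * ln((1 + x) / (1 - x))."""
    out, val, fish = [], 0.0, 0.0
    for i in range(len(prices)):
        window = prices[max(0, i - period + 1):i + 1]
        lo, hi = min(window), max(window)
        raw = 0.0 if hi == lo else (prices[i] - lo) / (hi - lo) - 0.5
        val = 0.66 * raw + 0.67 * val        # smooth the normalized price
        val = max(-0.999, min(0.999, val))   # keep the log argument finite
        fish = 0.5 * math.log((1 + val) / (1 - val)) + 0.5 * fish
        out.append(fish)
    return out
```

The transform pushes the distribution of the normalized price toward a Gaussian shape, which is the usual motivation for feeding "Fisherized" price to a net instead of raw price.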
 

Attachments

  • f1.JPG
    141.4 KB
Out-of-sample results
 

Attachments

  • o1.JPG
    68.1 KB
  • o2.JPG
    68 KB
  • o3.JPG
    68.3 KB
From those results I think it is difficult to say whether Fisherized price is a more valuable input than normal price.

However, the biggest advantage I see is that the OOS performance hardly decreases at all.

I made a similar test on the same data set but for different inputs (output from different DFs and price), see http://www.forexfactory.com/showthread.php?t=68181&page=48 from post 719, where the OOS performance always decreased significantly.
 
Here is how the OOS performance looks in the input analysis for the setup presented in the FF thread.
 

Attachments

  • df1.JPG
    65.7 KB
  • df2.JPG
    67.7 KB
  • df3.JPG
    67.2 KB

I'm on my way to FF.
 