Recent content by danielfppps

  1. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    Hi Hamjii, Thanks for writing. We do not have a Python platform that can interface with MT4. What we have is a programming framework that interacts with different front-ends using a single library (so you code a system once and can trade it within a variety of platforms). One of the front-ends...
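
    The framework itself is not shown in the post, so the following is only a hypothetical sketch of the "code once, trade on several front-ends" idea; every name in it (FrontEnd, PrintFrontEnd, strategy) is made up for illustration.

    ```python
    # Hypothetical sketch only: none of these names (FrontEnd, PrintFrontEnd,
    # strategy) come from the actual framework discussed in the post.
    from abc import ABC, abstractmethod

    class FrontEnd(ABC):
        """Adapter that hides the details of one specific trading platform."""
        @abstractmethod
        def send_order(self, symbol: str, lots: float, direction: str) -> None: ...

    class PrintFrontEnd(FrontEnd):
        """Stand-in front-end that just logs orders (e.g. for dry runs)."""
        def send_order(self, symbol, lots, direction):
            print(f"{direction} {lots} lots of {symbol}")

    def strategy(front_end: FrontEnd, signal: int) -> None:
        """The system is written once, against the FrontEnd interface only."""
        if signal > 0:
            front_end.send_order("EURUSD", 0.1, "BUY")
        elif signal < 0:
            front_end.send_order("EURUSD", 0.1, "SELL")

    strategy(PrintFrontEnd(), signal=1)  # an MT4 adapter could be swapped in here
    ```
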
  2. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    Actually the website was changed recently, so the image is quite new :) If you're looking to leverage our developments in NN, I believe that subscribing will be completely worth it. However, feel free to ask me any questions you may have about our community (by PM here or by using the...
  3. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    I am not seeking to prove my point to you ;) I am just trying to provide you with some guidelines to help you out. You're clearly free to take anything you want and ignore anything you don't deem reasonable :smart: Regarding the time dependence, you should realize that the market is not...
  4. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    The time dependence here is periodic, so the example is irrelevant to the issue of financial time series. If you want to test this you need to use something that resembles a financial time series (exactly the same class of time dependence). Periodic examples are not an accurate proxy for the...
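
    To make the distinction concrete, here is a small synthetic sketch (NumPy assumed) contrasting a periodic series with a random-walk proxy that is closer to a price path: the changes of the periodic series are strongly structured, while the increments of the walk carry essentially no structure.

    ```python
    # Synthetic illustration only: the increments of a periodic series are highly
    # structured (easy to "predict" by memorising the period), while the increments
    # of a random-walk proxy for a price path carry essentially no structure.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000
    periodic = np.sin(2 * np.pi * np.arange(n) / 50)        # fixed, repeating structure
    random_walk = np.cumsum(rng.normal(size=n))             # closer to a price path

    for name, series in [("periodic", periodic), ("random walk", random_walk)]:
        diffs = np.diff(series)
        lag1 = np.corrcoef(diffs[:-1], diffs[1:])[0, 1]     # lag-1 autocorrelation of changes
        print(f"{name:12s} lag-1 autocorrelation of changes: {lag1:+.3f}")
    ```
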
  5. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    I would go for a) since your algo wasn't able to achieve above-chance predictive power. In order to test the time dependence problem you need to find a machine learning technique that at least offers some ability to predict. Otherwise it is insensitive to the features that reflect the time...
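
    As a rough illustration of the "above-chance" check (scikit-learn assumed, toy features rather than the poster's data), one can compare the model against a naive baseline on data it has never seen.

    ```python
    # Toy check (scikit-learn assumed, synthetic features, not the poster's data):
    # compare the model against a naive baseline on data it has never seen.
    import numpy as np
    from sklearn.dummy import DummyClassifier
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(1)
    X = rng.normal(size=(2_000, 5))
    y = (X[:, 0] + 0.5 * rng.normal(size=2_000) > 0).astype(int)   # made-up target

    cut = 1_500                                   # train strictly on the earlier part
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:cut], y[:cut])
    baseline = DummyClassifier(strategy="most_frequent").fit(X[:cut], y[:cut])

    print("model    accuracy:", model.score(X[cut:], y[cut:]))
    print("baseline accuracy:", baseline.score(X[cut:], y[cut:]))
    # If the two numbers are about equal, the model has no above-chance predictive power.
    ```
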
  6. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    Model building is clearly independent of shuffling. However, the results on a testing set do depend on whether the data used to build the model contains future data or not. So I am not talking about shuffling the data used to build the model, but about the...
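
    A hedged toy illustration of this point, assuming scikit-learn and purely synthetic data: with a shuffled train/test split the model effectively sees the future neighbourhood of each test point, while a chronological split does not.

    ```python
    # Toy illustration (scikit-learn assumed, purely synthetic data): y drifts
    # through time but has no stable relationship to X, yet a shuffled split makes
    # the forest look predictive because time-adjacent "future" points leak into training.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n = 2_000
    X = np.column_stack([np.cumsum(rng.normal(size=n)) for _ in range(3)])   # drifting features
    y = (np.convolve(rng.normal(size=n), np.ones(50) / 50, mode="same") > 0).astype(int)

    # Shuffled split: the training set contains points from the future of every test point.
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, shuffle=True, random_state=0)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print("shuffled split accuracy     :", rf.score(X_te, y_te))        # looks predictive

    # Chronological split: no future data is available when the model is built.
    cut = int(0.75 * n)
    rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:cut], y[:cut])
    print("chronological split accuracy:", rf.score(X[cut:], y[cut:]))  # close to chance
    ```
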
  7. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    I also made that mistake at some point. Features may look similar when you look at them in this manner, but in essence some relationships develop through time (not easy for us to detect, but easy for a random forest) that make prediction of past examples with data built from future...
  8. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    The assumption of time independence is wrong, which is my point. I encourage you to test and verify this, and also to quantify its influence on your results (I bet it's very big and much more positive than you think!). I hope this helps :smart: PS: You cannot ignore the time character of...
  9. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    Hi Fabwa, Be careful with that "out-of-bag" error in random forests: it will certainly contain data-snooping bias, because random forests are not aware of the time-dependent component in your time series analysis. The random forest will keep some items out-of-bag for prediction, but these...
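
    A hedged toy illustration (scikit-learn assumed, synthetic data): because the out-of-bag score comes from bootstrap resampling that ignores time order, it can look far better than a genuine forward test on the same series.

    ```python
    # Toy illustration (scikit-learn assumed, synthetic data): the OOB score comes
    # from bootstrap resampling that ignores time order, so on a time-dependent
    # series it can look far better than a genuine forward test.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    n = 2_000
    X = np.column_stack([np.cumsum(rng.normal(size=n)) for _ in range(3)])   # drifting features
    y = (np.convolve(rng.normal(size=n), np.ones(50) / 50, mode="same") > 0).astype(int)
    # y is autocorrelated in time but not genuinely predictable from X.

    cut = int(0.75 * n)
    rf = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
    rf.fit(X[:cut], y[:cut])

    print("out-of-bag score   :", rf.oob_score_)               # optimistic
    print("forward test score :", rf.score(X[cut:], y[cut:]))  # closer to reality
    ```
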
  10. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    This was a simple system developed many years ago (in 2010); things have evolved since then ;) This NN system is evaluated using a normal back-test, but within that back-test the network is retrained on every bar, so training/prediction follows a WFO pattern. Please note that all the...
  11. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    The NN does retrain every day in the way that Krzysztof mentions, so the whole test can be considered out-of-sample as far as machine learning predictive capacity is concerned (it's a collection of all the 1D predictions made by the daily retrained networks). However you clearly need to define a network...
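
    A minimal sketch of the daily-retraining scheme described in the two posts above; the model and every name in it are placeholders (a simple regressor stands in for the NN), not the actual system.

    ```python
    # Placeholder sketch of the daily-retraining pattern (a simple regressor stands
    # in for the NN; window size and features are made up, not the actual system).
    import numpy as np
    from sklearn.linear_model import Ridge

    def walk_forward_predictions(X, y, window=500):
        """Retrain on the trailing `window` bars only, predict the next bar, repeat."""
        preds = []
        for t in range(window, len(X)):
            model = Ridge().fit(X[t - window:t], y[t - window:t])   # past data only
            preds.append(model.predict(X[t:t + 1])[0])              # one 1-day-ahead prediction
        return np.array(preds)       # the whole collection is effectively out-of-sample

    rng = np.random.default_rng(4)
    X = rng.normal(size=(1_000, 4))
    y = X @ np.array([0.5, -0.2, 0.1, 0.0]) + rng.normal(scale=0.1, size=1_000)
    print(walk_forward_predictions(X, y)[:5])
    ```
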
  12. Walk Forward Analysis - the only logical successor to backtesting [DISCUSS]

    Walk forward optimization is just a more complicated back-testing route where data-mining bias is harder to determine. Here are a few things worth considering: There is no concrete evidence that a system with successful WFO has a better chance at live trading success than a system optimized...
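
    For readers unfamiliar with the mechanics, a generic walk-forward optimisation loop looks roughly like this (toy scoring function and parameter grid, not any particular system): re-optimise on each in-sample window, then apply the chosen parameter to the following out-of-sample window and stitch the results together.

    ```python
    # Generic walk-forward optimisation loop (toy scoring function and parameter
    # grid, not any particular system): re-optimise on each in-sample window, then
    # trade the chosen parameter on the next out-of-sample window and stitch results.
    import numpy as np

    def score(returns, lookback):
        """Toy rule: trade yesterday's sign of a trailing moving average."""
        ma = np.convolve(returns, np.ones(lookback) / lookback, mode="full")[:len(returns)]
        signal = np.sign(ma)
        return float(np.sum(np.roll(signal, 1)[1:] * returns[1:]))

    def walk_forward(returns, in_sample=500, out_sample=100, grid=(10, 20, 50, 100)):
        oos_pnl = []
        for start in range(0, len(returns) - in_sample - out_sample + 1, out_sample):
            is_part = returns[start:start + in_sample]
            oos_part = returns[start + in_sample:start + in_sample + out_sample]
            best = max(grid, key=lambda lb: score(is_part, lb))   # optimise in-sample
            oos_pnl.append(score(oos_part, best))                 # evaluate out-of-sample
        return sum(oos_pnl)   # stitched out-of-sample result; data-mining bias can remain

    rng = np.random.default_rng(5)
    print(walk_forward(rng.normal(scale=0.01, size=3_000)))
    ```
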
  13. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    Yes, those are calendar days. The drawdown is actually very shallow and long (so you cannot see it well) but some further analysis shows it (see attached for a drawdown decomposition of the strategy). Regarding future leaks, there is actually no possibility for data snooping within our...
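
    As a generic sketch (not the attached decomposition), drawdown depth and time under water can be measured directly from an equity curve like this.

    ```python
    # Generic sketch (not the attached decomposition): depth and duration of
    # drawdowns measured directly from an equity curve.
    import numpy as np

    def drawdown_stats(equity):
        """Return (maximum fractional drawdown, longest time under water in bars)."""
        peaks = np.maximum.accumulate(equity)
        drawdown = (equity - peaks) / peaks          # <= 0 at every point
        longest = current = 0
        for under_water in drawdown < 0:
            current = current + 1 if under_water else 0
            longest = max(longest, current)
        return drawdown.min(), longest

    rng = np.random.default_rng(6)
    equity = 10_000 * np.cumprod(1 + rng.normal(0.0003, 0.01, size=2_000))
    depth, length = drawdown_stats(equity)
    print(f"max drawdown {depth:.1%}, longest drawdown {length} bars")
    ```
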
  14. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    1) Volume is the volume traded for each position, in lots (values on the right axis). 2) Yes, 880 would mean 880 days. 3) We normalize the lot size to risk a fixed dollar amount on the SL (since the simulation is non-compounding); when using compounding we adjust it to risk a fixed percentage...
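
    A hedged sketch of point 3), assuming forex-style contract sizes (roughly $10 per pip per standard lot); the normalisation actually used by the system may differ.

    ```python
    # Hedged sketch of point 3): forex-style sizing with roughly $10 per pip per
    # standard lot assumed; the system's actual normalisation may differ.
    def lots_for_fixed_risk(risk_dollars, sl_pips, pip_value_per_lot=10.0):
        """Lots sized so that hitting the SL loses roughly risk_dollars."""
        return risk_dollars / (sl_pips * pip_value_per_lot)

    def lots_for_percent_risk(equity, risk_pct, sl_pips, pip_value_per_lot=10.0):
        """Compounding variant: risk a fixed percentage of current equity instead."""
        return lots_for_fixed_risk(equity * risk_pct, sl_pips, pip_value_per_lot)

    print(lots_for_fixed_risk(500, sl_pips=50))               # non-compounding: $500 at risk
    print(lots_for_percent_risk(100_000, 0.01, sl_pips=50))   # compounding: 1% of equity
    ```
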
  15. 3rd generation NN, deep learning, deep belief nets and Restricted Boltzmann Machines

    Well, I speak from my personal experience regarding modeling difficulty: it's simply much easier to get profitable results on the higher time frames. The simulation I showed you also has high statistical significance (+1,600 trades from 1986-2014) :) This system makes one prediction every day and...
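
    As a rough, purely illustrative sketch (made-up numbers, not the actual system's trades), one common first check of significance with a large trade count is a simple t-statistic on the per-trade results.

    ```python
    # Illustrative numbers only (not the actual system's trades): with a large trade
    # count, a simple t-statistic on per-trade results is a rough first significance check.
    import numpy as np

    rng = np.random.default_rng(7)
    trade_returns = rng.normal(0.0005, 0.01, size=1_600)   # made-up per-trade returns

    t_stat = trade_returns.mean() / (trade_returns.std(ddof=1) / np.sqrt(len(trade_returns)))
    print(f"mean trade {trade_returns.mean():.4%}, t-statistic {t_stat:.2f}")
    ```
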