I was able to reproduce their update, but I'm traveling at the moment. I will post as soon as I can.
I have AME too; I find that it works better, and sometimes very well, when I set the parameters manually. Usually the optimizer in NeuroShell doesn't do well.
This is a benign target betting method. http://goo.gl/UK7DYH
I am experimenting with it as a money-management (MM) scheme on a trading strategy. I think it could work well when conditioned on an MA on range bars.
World's First Bitcoin ATM Is Announced??
CURRENCY of the FUTURE?! World's First BITCOIN ATM in Cyprus [INFOWARS - Nightly News] - YouTube
Story needs to be verified though!
I ask again because there is no way you can get such good results with Kalman filters. I had a look at the charts you posted and noticed that the series in your pred Kalman.csv files lead outrageously. See screenshot on akra.
I have already played with Kalman filters/smoothers: adaptive, recursive, online/incremental, and every variant I could find. My results were far from being as good as yours.
How did you obtain the indicator data series (pred Kalman.csv files) your charts rely upon?
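One common explanation for series that "lead" the price is that a Kalman *smoother* was used instead of a *filter*: a filter is causal and uses only past data, while a smoother's backward pass mixes in future observations, so its output appears to anticipate moves when charted. A minimal causal sketch for comparison (the `Q`/`R` noise parameters are illustrative assumptions, not anything from the posted files):

```python
# Minimal 1D (local-level) Kalman filter: causal, each estimate uses
# only observations up to the current bar. Q and R are assumed values.
def kalman_filter(prices, Q=1e-5, R=1e-2):
    x, P = prices[0], 1.0          # initial state estimate and variance
    out = []
    for z in prices:
        P = P + Q                  # predict: variance grows by process noise
        K = P / (P + R)            # Kalman gain
        x = x + K * (z - x)        # update with the current observation only
        P = (1 - K) * P
        out.append(x)
    return out

series = [100.0, 100.5, 99.8, 100.2, 101.0, 100.7]
print(kalman_filter(series))
```

A smoother would add a backward pass over `out`; plotting that against price is exactly what produces the too-good-to-be-true leading look.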
The technology behind Eureqa is GEP (Gene Expression Programming). It was introduced in 1999. Gepsoft sells a commercial version, but it has also been implemented in several libraries:
GEP: Downloads
C#
More AI...(GEP) Gene Expression Programming in C# and .NET
C++
Gene Expression...
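To give a feel for what those libraries do: in GEP a chromosome is a fixed-length string (a "K-expression") whose head may hold functions or terminals and whose tail holds only terminals; the string is decoded breadth-first into an expression tree and evaluated. A toy decoding sketch (illustrative only, not Gepsoft's implementation; the function set and terminals are my own choices):

```python
import operator

# Toy GEP sketch: binary functions plus terminals 'x' and '1'.
FUNCS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
ARITY = {'+': 2, '-': 2, '*': 2}

def evaluate(gene, x):
    """Decode a K-expression breadth-first into a tree, then evaluate at x."""
    symbols = list(gene)
    root = {'sym': symbols[0], 'kids': []}
    queue, i = [root], 1
    while queue:                       # assign children level by level
        node = queue.pop(0)
        for _ in range(ARITY.get(node['sym'], 0)):
            child = {'sym': symbols[i], 'kids': []}
            i += 1
            node['kids'].append(child)
            queue.append(child)
    def ev(node):
        s = node['sym']
        if s in FUNCS:
            return FUNCS[s](ev(node['kids'][0]), ev(node['kids'][1]))
        return x if s == 'x' else 1.0  # terminals
    return ev(root)

# "+*1xx" decodes breadth-first to (x*x) + 1
print(evaluate('+*1xx', 3.0))  # → 10.0
```

The fixed length is the key trick: mutation and crossover can scramble the string freely, and the decoded tree is always syntactically valid.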
My ensembles invariably work well so I wondered about the NFLT.
By the No Free Lunch Theorem (NFLT), no optimization algorithm is superior to random search when averaged over all possible problems. Put differently, no model is better than any other model when averaged over all market conditions.
Well? Ensembles...
It is customized. I don't use a confusion matrix but a Q-Q plot. I maintain two equity curves: the equity from the meta algorithm, and a random-path equity for which each entry is random instead. I then plot the % of random paths that beat the model vs. the theoretical quantiles and use it to dynamically...
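My reading of that setup can be sketched as a Monte Carlo test: generate many equity paths whose entries are random, then measure the fraction that finish ahead of the model's equity. All names and inputs below are illustrative assumptions, not the poster's exact code:

```python
import random

def random_path_pvalue(model_returns, trade_pool, n_paths=1000, seed=42):
    """Fraction of random-entry equity paths beating the model's final equity.

    model_returns: per-trade returns of the meta algorithm (assumed input).
    trade_pool: pool of returns to sample random entries from (assumed input).
    """
    rng = random.Random(seed)
    model_equity = sum(model_returns)
    n = len(model_returns)
    beats = 0
    for _ in range(n_paths):
        path_equity = sum(rng.choice(trade_pool) for _ in range(n))
        if path_equity > model_equity:
            beats += 1
    return beats / n_paths

# Example: a model with positive drift vs. a zero-mean random pool.
model = [0.5, -0.2, 0.4, 0.1, 0.3]
pool = [0.5, -0.5, 0.2, -0.2, 0.1, -0.1]
print(random_path_pvalue(model, pool))
```

Tracking this fraction against the theoretical quantiles over time gives the Q-Q-style diagnostic the post describes: if it drifts toward 0.5, the model is doing no better than random entries.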
The switching is taken care of by an ensemble learning method. No lag is induced.
Ensemble learning - Wikipedia, the free encyclopedia
The equity is almost as good as that of the best model in the bag, chosen in hindsight. A lower bound on how wrong you can be is guaranteed, which is not the case with single models.
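That guarantee is the shape of the regret bound behind multiplicative-weights ("Hedge") style ensembles: the weighted combination trails the best single expert in hindsight by a term growing only like sqrt(T log N). A toy weight-update sketch (the learning rate and loss sequence are illustrative assumptions, not the poster's actual method):

```python
import math

def hedge_weights(losses_per_round, eta=0.5):
    """Multiplicative-weights (Hedge) update over per-model losses.

    losses_per_round: list of rounds, each a list of per-model losses in [0, 1].
    eta: learning rate (illustrative choice).
    Returns the final normalized model weights.
    """
    n = len(losses_per_round[0])
    w = [1.0] * n
    for losses in losses_per_round:
        # Down-weight each model exponentially in its loss this round.
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
        s = sum(w)
        w = [wi / s for wi in w]   # renormalize to a distribution
    return w

# Model 0 consistently loses least, so it accumulates most of the weight.
rounds = [[0.1, 0.9, 0.5]] * 20
print(hedge_weights(rounds))
```

Because the weights shift smoothly every round, the "switching" between models happens without a discrete regime change, which is consistent with the no-lag claim above.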