1M backtest available… how do you manage it?


#157467

    Hi guys, now that the 1m unit backtest is available how do you manage it?

    Are you trying to create a strategy on the whole million? That greatly expands the backtesting required time and difficulty but allows a theoretically more solid strategy.
    Or are you trying to use a smaller number of units? On which criterion?

    #157472

Hmmm … initially I just thought, yeah, run it all. But maybe …


    1. Code concept

2. Backtest on the last known “current” market conditions. In this case, post US election and tweetgate: Nov 3+.

    2.5 optimise

3. When good, walk forward 70/30 (see the sketch after this list)

    4. If still good, VRT test

    5. if still good, run it on all bars available

    5.5 rerun walk forward, VRT

    6. if still good, forward test with demo (optional for me 😬)

    7. If still good, order wife new handbag and me new golf clubs.

    Something like that, at least on paper anyway!
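For anyone who wants to prototype step 3 outside the platform, here is a minimal sketch of a 70/30 walk-forward split in plain Python. The `bars`, `param_grid` and `backtest` names are hypothetical placeholders, not anything from ProRealTime:

```python
# Minimal sketch of step 3: optimize on the first 70% of the history,
# then judge the chosen parameters on the untouched last 30%.
def walk_forward_70_30(bars, param_grid, backtest):
    split = int(len(bars) * 0.70)
    in_sample, out_of_sample = bars[:split], bars[split:]

    # Optimize only on the in-sample 70%.
    best_params = max(param_grid, key=lambda p: backtest(in_sample, p))

    # The out-of-sample 30% is what tells you whether it generalizes.
    return (best_params,
            backtest(in_sample, best_params),
            backtest(out_of_sample, best_params))
```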

    #157477

Monobrow pretty well nails it there, IMO. I could add this psychological effect:

Because 1M really is too much to deal with when it comes to “seeing through” what should be changed etc., and because the backtesting takes too long (like 5 times longer than with the normally available 200K of data), you thus use the 200K or less. You optimize that, you may even over-optimize it accidentally, but when you are finally ready (already thinking about handbags), you run the 1M. And *that* will be your real test. You will quickly start looking around for old handbags, new in box, etc., because the chance is pretty much 100% that it totally failed. However:

*Because* you worked so long on the 200K, you will now easily see where you made the mistakes, over-optimized or not. You will now change things for the new 800K such that the old 200K does not gain less than it did. And if it does after all, you know what that is for … it is for gaining extra in the four times larger part.

Anyways … be ready to see e.g. 10K profit in your 200K bars. Also be ready to NOT see a total of 50K in the 1M bars; you will more likely see a loss of 5K. 🙂
    Then apply the above and aim for the 50K …
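In code terms, a minimal sketch of that “the old 200K shall not gain less” re-optimization might look like this; the `backtest` function, the parameter grid, and the profit floor are hypothetical placeholders:

```python
# Sketch of the idea above: when re-optimizing over the full 1M bars,
# only accept parameter sets that keep the recent 200K segment at or
# above the profit it already achieved.
def reoptimize_keep_200k(bars_1m, param_grid, backtest, floor_profit):
    recent_200k = bars_1m[-200_000:]   # the segment originally tuned on
    candidates = []
    for p in param_grid:
        if backtest(recent_200k, p) < floor_profit:
            continue                   # the old 200K must not gain less
        candidates.append((backtest(bars_1m, p), p))
    # Among the survivors, take the set that gains the most on all 1M
    # bars, i.e. the extra profit comes from the 4x larger older part.
    return max(candidates, key=lambda c: c[0])[1] if candidates else None
```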


    #157483

I use all the data I can get. I build and optimize each variable with 1M bars at 75/25, then VRT, then demo. Very time consuming, but it gives a fuller picture of how the strategy might perform under different conditions. I wanna see a 6-year histogram, green in every quarter, with solid OOS and VRT, otherwise … fuggedaboudit, throw it away!
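If you want to script that “green in every quarter” check on an exported trade list, here is a minimal sketch, assuming a hypothetical list of (close datetime, profit) pairs:

```python
# Sketch of the "green in every quarter" check: bucket trade profits
# by calendar quarter and require every bucket to be net positive.
from collections import defaultdict
from datetime import datetime

def quarterly_profits(trades):
    buckets = defaultdict(float)
    for when, profit in trades:
        buckets[(when.year, (when.month - 1) // 3 + 1)] += profit
    return dict(sorted(buckets.items()))

def green_every_quarter(trades):
    return all(p > 0 for p in quarterly_profits(trades).values())

# Hypothetical example: Q1 2020 is green, Q2 2020 is red -> False.
trades = [(datetime(2020, 2, 3), 120.0), (datetime(2020, 5, 7), -40.0)]
print(green_every_quarter(trades))  # False
```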

    #158019

    Hello,


I have a question: since when has tick-by-tick been available? I wonder.


Because without tick-by-tick, the backtest is a little useless, I think.


    Thanks,

    #158023

Tick by tick has been available for several years … see the attachment, at the red arrowhead.

    #159128

    Are you trying to create a strategy on the whole million? That greatly expands the backtesting required time and difficulty but allows a theoretically more solid strategy.

    Or are you trying to use a smaller number of units? On which criterion?

I think that optimizing parameters on 1 million bars will, first of all, result in a further curve fit, which does not guarantee future performance at all. Optimizing on 1 million bars means in most cases searching for the perfect algorithm that works under all market conditions: high and low volatility, a market crash, and a DAX price of 5,000 or 14,000. I don’t think that such an algorithm exists that would yield good results in out-of-sample conditions.

1 million bars can be useful for time scales below 1 minute. For example, on a 10-second chart you can now backtest over many months instead of only 7-10 days. Any backtest and curve fit then has a broader statistical basis. And you can immediately test this curve fit out of sample in earlier time periods. For example, I have reoptimized several of my 10-second algos over 1 million bars from March 2020 to January 2021 (rather high volatility). Backtesting these curve fits from 2015 to February 2020 (entirely out of sample, conditions of both high and low volatility) showed that almost no algo yielded positive results in those periods. Most of them produced losses on the order of (number of positions × spread). So, applied correctly, 1 million bars can help to sort out even more non-performing algorithms.
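That last point is essentially the zero-edge benchmark: a strategy with no real edge tends to lose roughly its total trading costs. A tiny sketch with illustrative numbers (not from the post):

```python
# Zero-edge benchmark: with no edge, the expected loss is roughly
# (number of positions) x (spread) x (value per point).
def zero_edge_loss(num_positions, spread_points, point_value=1.0):
    return num_positions * spread_points * point_value

# Illustrative only: 800 trades, 1-point spread, 1 EUR per point.
print(zero_edge_loss(800, 1.0))  # 800.0 -- an OOS result near -800
                                 # is what "no edge" looks like
```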

A few of my older algos, retested over 1 million bars (mostly the whole time span from 2010 to 2021, no low time scale), have surprisingly shown quite good out-of-sample results. These are the algo types I am continuing to work on right now.

This has been my experience with 1 million bars so far. And 1 million bars make you think a lot about the efficiency of your code and parameter optimizations, because when they are too complicated, they take an eternity to run.

    #164225

Does anyone know the historical data limitation on the 1M backtest? For example, a 2 min timeframe will return virtually the full 1M bars, but a 30 min timeframe will return 145k records for the FTSE.
Are the limitations per timeframe, or per instrument and timeframe? And are they specified under the question-mark icon in the top corner of the PRT screen?

    #164227

It only goes back to Aug 2010, as that’s all the tick-by-tick data they’ve got. All TFs of 5 min and over will max out at that point. Otherwise it’s proportional: 2 min is about 6 years, 1 min is ~3 years, etc.
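The proportionality is easy to sanity-check with rough arithmetic. A minimal sketch, assuming a near-24h instrument trading about 260 days a year (adjust both numbers for your market):

```python
# Rough check of the proportionality above: how many calendar years
# do 1M bars cover at a given timeframe? Assumes ~23 trading hours a
# day and ~260 trading days a year -- adjust for your instrument.
def years_covered(timeframe_minutes, bars=1_000_000,
                  hours_per_day=23, days_per_year=260):
    minutes_per_year = hours_per_day * 60 * days_per_year
    return bars * timeframe_minutes / minutes_per_year

print(round(years_covered(1), 1))  # ~2.8 years on a 1-minute chart
print(round(years_covered(2), 1))  # ~5.6 years on a 2-minute chart
```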

