#89902

However, if you developed it on a bite-size chunk of history, then you will be very unlikely to find that it works on the whole of the rest of history.

See, this is where we differ, I think. I like to optimize on a chunk of the data (30-50%) and then check against the rest (the full 100%) of the data. Let's say I optimize on 2006 -> 2014; then I want to see the "same" results going from 2014 -> 2019.
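Just to make that concrete, here's a minimal sketch of that split in Python. Everything here is a placeholder, not anyone's actual system: the `backtest` function, the `prices.csv` file, and the lookback range are all hypothetical stand-ins for "optimize a parameter in-sample, then re-run the exact same settings on the later period".

```python
import pandas as pd

def backtest(prices: pd.Series, lookback: int) -> float:
    """Hypothetical stand-in for a real backtest: scores a simple
    moving-average rule with the given lookback (total return of the rule)."""
    signal = (prices > prices.rolling(lookback).mean()).shift(1, fill_value=False)
    daily_returns = prices.pct_change().fillna(0.0)
    return float(daily_returns[signal].sum())

# a date-indexed price series covering 2006-2019 (assumed to exist on disk)
prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True).squeeze("columns")

in_sample  = prices.loc["2006":"2013"]   # the chunk you optimize on (2006 -> 2014)
out_sample = prices.loc["2014":"2019"]   # the "go live" period you only look at afterwards

# pick the lookback that scores best in-sample...
best = max(range(10, 210, 10), key=lambda lb: backtest(in_sample, lb))

# ...then check whether the "same" results show up from 2014 -> 2019
print("best lookback:      ", best)
print("in-sample score:    ", backtest(in_sample, best))
print("out-of-sample score:", backtest(out_sample, best))
```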


Maybe this is just me, but I feel like that is the safest way to create something robust. Basically it means you built an algo in "2014" and went "live" with it for 5 years (through 2019); now I would feel a lot more comfortable running it live.


The question is, though: if you optimize on that 50% of the data and then find shitty results with 4 out of 5 combinations when checking the full 100% of the data, is the surviving one robust, or is that 1 combination (which you chose out of hundreds of combinations) just luck?
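One way to get a feel for that "just luck" worry is to run the same selection process on strategies that definitely have no edge (pure noise). This is only a toy simulation, not your setup: 200 made-up zero-edge strategies, pick the best one on the first half of the data, then see how it does on the second half.

```python
import numpy as np

rng = np.random.default_rng(0)

n_strategies, n_days = 200, 2500           # ~10 years of daily "returns"
split = n_days // 2                        # first half = in-sample

# zero-edge strategies: every daily return is pure noise
returns = rng.normal(loc=0.0, scale=0.01, size=(n_strategies, n_days))

in_sample_pnl  = returns[:, :split].sum(axis=1)
out_sample_pnl = returns[:, split:].sum(axis=1)

best = in_sample_pnl.argmax()              # the combination you would have picked
print("best in-sample P&L:   ", in_sample_pnl[best])
print("its out-of-sample P&L:", out_sample_pnl[best])
print("out-of-sample rank:   ", int((out_sample_pnl > out_sample_pnl[best]).sum()) + 1,
      "of", n_strategies)
```

If the in-sample winner lands somewhere in the middle of the pack out-of-sample, that's the selection-luck effect in action, and the more combinations you try, the stronger it gets. So 1 combination surviving out of 5 (picked from hundreds) isn't, by itself, evidence of robustness.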