Automatic Trading: The Dangers of Over-Optimization

Discover the pitfalls of algorithmic over-optimization, and how to avoid them to improve the performance of your automated trading strategy.

Algorithmic over-optimization: when the best is the enemy of the good

A perfectionist offshoot of automatic trading, over-optimization is the act of reducing an algorithm's overall performance while trying to increase it in one particular context.

A time machine would surely be a trader's number one wish: armed with knowledge of future price fluctuations, he would indeed have the power to “predict” the evolution of prices. Unfortunately, until proven otherwise, time machines do not exist.

However, when he tries to optimize a trading algorithm against a price history, the trader is in a similar situation (he, too, knows what happens next). The temptation is then great to tune the variables and conditions of his algorithm so that they fit past fluctuations perfectly… at the risk of falling into over-optimization!


What is over-optimization in trading?

To understand the dangers of over-optimization, it helps to recall what optimization means in the world of automatic trading.

Far from simply removing dysfunctional parameters and superfluous lines of code in the algorithm, optimization aims to make a trading system more efficient.

Once the algorithm has been designed, there is a period of beta testing during which the trader tests the reliability, performance and resilience of the program.

During this period, the trader can add new parameters to his algorithm, but above all can optimize (or even delete) some of the variables and constants already in use.

As you may have guessed, over-optimization consists of tuning the parameters of a trading algorithm too tightly to past data, and in so doing making it more “fragile” and less able to perform once launched on the financial markets in live conditions.

Examples of optimizable (and therefore over-optimizable) parameters (a short counting sketch follows the list):

  • stop-loss or take-profit distance;
  • specific parameter of a technical indicator;
  • time slots of activity;
  • and many others!
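
To give a sense of how quickly these choices add up, here is a minimal Python sketch that counts the combinations a naive grid search would have to test. The parameter names and ranges below are invented for illustration, not taken from any real strategy:

```python
# Hypothetical parameter grid -- names and ranges are illustrative only.
from itertools import product

param_grid = {
    "stop_loss_pips":   range(10, 101, 10),   # 10 values
    "take_profit_pips": range(20, 201, 20),   # 10 values
    "rsi_period":       range(5, 31, 5),      # 6 values (indicator parameter)
    "start_hour":       range(0, 24, 2),      # 12 values (time slot of activity)
}

combinations = list(product(*param_grid.values()))
print(f"{len(combinations)} parameter sets to test")  # 10 * 10 * 6 * 12 = 7,200
```

Every parameter you add multiplies the size of this search space, and with it the odds of finding a combination that fits past data purely by chance.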

The pitfalls of over-optimization in trading

There is no clear threshold beyond which an algorithm counts as over-optimized. The practice nonetheless has a concrete impact on the results of an algorithmic strategy (it may well be the leading cause of failure of contemporary trading algorithms).

Because an algorithm can always be optimized further, the temptation is great to lose oneself in marginal improvements that bring little value. The challenge lies in the trader's ability to keep constantly in mind that over-adapting to too small a sample of data quickly proves counter-productive.

Thus, while backtesting is a great way to simulate the behaviour of an algorithm in the past and obtain valuable information in terms of performance and risk, you must be careful not to go too far.

Excessive adjustment of your algorithm’s variables (also known as overfitting) with the sole aim of maximizing the program’s performance or minimizing its risk in a specific context is a mistake that should not be made.

Indeed, by optimizing each variable on the basis of past data, you are certainly improving the performance and/or risk criteria, but you are doing so in a one-off situation that is unlikely to be repeated in the future.

What is more, you are using market information that neither you nor your algorithm would have had access to at the time these market fluctuations took place…
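
As a minimal sketch of this trap (the moving-average crossover strategy and the synthetic random-walk prices below are assumptions for illustration, not the article's own method), one can grid-search parameters on the first half of a price series and then check them on the unseen second half:

```python
import numpy as np

rng = np.random.default_rng(0)
prices = 1000 + np.cumsum(rng.normal(0, 1, 2000))  # synthetic random walk
returns = np.diff(prices) / prices[:-1]

def crossover_pnl(prices, returns, fast, slow):
    """Total return of a long-only fast/slow SMA crossover."""
    fast_ma = np.convolve(prices, np.ones(fast) / fast, mode="valid")
    slow_ma = np.convolve(prices, np.ones(slow) / slow, mode="valid")
    fast_ma = fast_ma[-len(slow_ma):]                       # align the two series
    position = (fast_ma[:-1] > slow_ma[:-1]).astype(float)  # held over the next bar
    return np.sum(position * returns[-len(position):])

split = 1000  # first half: in-sample; second half: out-of-sample
best = max(
    ((f, s) for f in range(2, 50) for s in range(f + 1, 100)),
    key=lambda p: crossover_pnl(prices[:split], returns[:split - 1], *p),
)
print("best in-sample parameters:", best)
print("in-sample PnL    :", crossover_pnl(prices[:split], returns[:split - 1], *best))
print("out-of-sample PnL:", crossover_pnl(prices[split:], returns[split:], *best))
```

On random-walk data the “best” parameters have captured nothing but noise, so the in-sample result typically looks flattering while the out-of-sample result collapses.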

How to avoid over-optimizing in trading?

Proceed with rigor

The first thing to do to avoid the dangers of over-optimization is to conduct a robustness test to evaluate the strength of your algorithm.

Tools such as the Walk Forward analysis offered by ProRealTime allow you to compare the results of backtests on the data used for optimization (in-sample) with the results obtained on subsequent data the algorithm has never seen (out-of-sample).
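
The splitting logic behind the walk-forward idea can be sketched in a few lines (a generic illustration, not ProRealTime's actual implementation; the window sizes are arbitrary):

```python
def walk_forward_windows(n_bars, in_sample, out_sample):
    """Yield (train, test) slice pairs rolling across the price history."""
    start = 0
    while start + in_sample + out_sample <= n_bars:
        yield (slice(start, start + in_sample),
               slice(start + in_sample, start + in_sample + out_sample))
        start += out_sample  # roll the anchor forward by one test window

for train, test in walk_forward_windows(n_bars=1000, in_sample=400, out_sample=100):
    print(f"optimize on bars {train.start}-{train.stop - 1}, "
          f"validate on bars {test.start}-{test.stop - 1}")
```

If the parameters chosen on each in-sample window keep performing on the following out-of-sample window, the strategy is less likely to be a product of over-optimization.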

In addition to these backtesting practices, it is also better to anticipate the risks of over-optimization and test your algorithm in various environments (see the sketch after this list), such as:

  • periods of different lengths;
  • different histories;
  • different assets;
  • different sectors;
  • different market phases (both in terms of trend and volatility).
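
A hedged sketch of such a battery of tests follows; the backtest stub, asset names and periods are hypothetical placeholders, not a real API:

```python
import random

def backtest(asset, start, end):
    """Stand-in for a real backtest engine; returns a fake total return (%)."""
    return round(random.uniform(-10.0, 25.0), 1)

assets  = ["EURUSD", "SP500", "GOLD"]                             # different assets / sectors
periods = [("2015", "2017"), ("2018", "2020"), ("2020", "2022")]  # different histories

results = {(a, s, e): backtest(a, s, e) for a in assets for (s, e) in periods}
for key, pnl in sorted(results.items()):
    print(key, f"{pnl:+.1f} %")

# A strategy that only shines in one cell of this grid is a red flag.
```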

The more representative your sample, the more relevant your algorithm’s statistics will be and the more consistent your performance will be over the long term. Keeping a macroscopic view is therefore a major asset for your strategy.

Furthermore, limiting the number of parameters and simplifying your algorithm as much as possible are among the best practices to follow in order to reduce the risks of over-optimization.

Indeed, if you add too many parameters, you can end up with a function that perfectly describes the history of your sample without being of any use, apart from the satisfaction of obtaining a nice simulated performance…
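
The point can be made concrete with a classic illustration (synthetic data, invented figures): a polynomial with as many parameters as data points describes its sample perfectly and predicts nothing:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(10.0)
prices = 100 + np.cumsum(rng.normal(0, 1, 10))  # 10 synthetic data points

coeffs = np.polyfit(t, prices, deg=9)           # 10 parameters: exact fit
fit = np.polyval(coeffs, t)
print("max in-sample error:", np.max(np.abs(fit - prices)))  # essentially zero

print("'prediction' at t=10:", np.polyval(coeffs, 10.0))     # typically far off
```

Ten parameters buy a perfect description of ten past points, and nothing more.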

Finally, trying to identify as clearly as possible the market anomaly you are trying to exploit with your algorithm will help you work in the right direction. So rather than looking for an idea in the price history, try to look for a Trading Hypothesis first and then put it to the test in the markets!

Resist perfectionism

In addition to technical checks, your ability to avoid over-optimization will be heavily influenced by your investor psychology. Your resistance to perfectionism is a key to keeping a cool head.

Caution: this overzealousness is common among novice traders, who are often in search of the Holy Grail of algorithms. It is a classic mistake, in which infatuation clouds the critical eye an investor must keep.

In any case, whether you are working on an open-source algorithm or a creation of your own, it is essential to analyze in detail not only the performance of your algorithm but also, and especially, its statistical significance.

Avoiding the pitfalls of over-optimization requires constant vigilance. Technical and psychological vigilance are, therefore, two sides of the same coin. Without these qualities, the algorithmic trader risks simply wasting his energy, or even worse: working against himself!


  1. Kovit • 03/11/2021 #

    I have definitely been guilty of some of the above pitfalls regarding overzealousness. With experience I’ve set myself limits: anything that is robust over different dates and in a walk forward, and has a 75% win rate and a gain of at least 10, gives a good buffer to then throw into live demo. Also, I’ve learnt never to throw anything away, as my most regular earner is a shorting bit of code I took from one of my first long/short bots, where the long code was rubbish.

  2. buffster76 • 03/11/2021 #

    I think the optimisation graphs in v11 are an excellent tool to help avoid “spikes” which give the perfect fit. The graphs allow you to choose a variable value that is not perfect but sits in a more stable (flatter) region and is therefore probably (hopefully?) more robust. I like the 2D charts but haven’t managed to get a benefit yet from the 3D charts, although they are very nice to look at. One suggestion would be to replace the 3D chart with a heatmap, which could be easier to read and would show two variables at once.
