In Part 44 we answered a few questions from comments that I thought might apply to the whole group. I said we were going to discuss the EA in more detail, but first I want to talk a little about back testing.
I’d like to start with a question from a reader:
“As I told you earlier, I’m working on an EA to implement the XYZ Strategy. While it seems profitable over some periods, it loses badly over others. I’ve been fooling around with different parameters in Strategy Tester, and did an optimization run, varying SL pips and TP pips. The best combination gave a profit of about $500. But when I took the parameters from that run, plugged them into the input values and did a solo run just on that set of values, the results were a net loss (of about $100).
Why the difference?”
And I’m going to quote my answer to him:
“I’ve run into that situation myself and it’s very frustrating. One thing I’ve noticed is some EAs, particularly ones with a lot of code that runs in the start() function, skip trades in the strategy tester. Take a look at the total number of trades for each and see if that’s the issue. If so, you can “throttle” the tester down by using the visual mode and setting it at 30ish. It will slow the test down dramatically, but may be more accurate.
“Another thing to check is the spread. The tester picks the spread at the beginning of the test from the current market – which, obviously, could be different each time – and uses that spread for the whole test. That alone can make a dramatic difference in your results. A newer feature of the tester allows you to pick a fixed spread instead of using the variable market spread, which could be a solution to the problem, but I haven’t experimented with that yet.”
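To see how much that one captured spread can move a backtest, here is a rough sketch with hypothetical numbers (not from the reader's actual test): the same set of trades, re-costed under a narrow and a wide spread. Because the tester applies a single spread to every trade, a wider spread at test start taxes the entire run.

```python
# Rough illustration (hypothetical numbers): the same trades re-costed
# under two different spreads. The tester applies ONE spread to every
# trade, so the spread it happens to capture at test start matters a lot.

PIP_VALUE = 10.0  # dollars per pip on one standard lot (assumption)

def net_profit(gross_pips_per_trade, n_trades, spread_pips):
    """Net dollars after paying the spread on every trade."""
    return (gross_pips_per_trade - spread_pips) * PIP_VALUE * n_trades

trades = 100
gross = 2.0  # average gross pips captured per trade (hypothetical)

print(net_profit(gross, trades, 1.0))  # 1-pip spread: +1000.0
print(net_profit(gross, trades, 3.0))  # 3-pip spread: -1000.0
```

An EA that averages 2 pips gross per trade is profitable at a 1-pip spread and a net loser at 3 pips, which is exactly the kind of flip the reader saw between runs.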
I’m not completely confident in the Strategy Tester for back testing strategies. As I mentioned, I’ve seen too many anomalies to have any warm and fuzzies when it comes to ST back test results. And of course there are the horror stories about brokers modifying the data feeds to “break” successful EAs. I’ve actually seen that with my own eyes. Right here at Winner’s Edge we had two trading accounts with the same broker, running the exact same EA. One account was significantly larger than the other. Over the course of two weeks, the smaller account won and lost as we anticipated; the larger account never had a winning trade in that time. Upon some study, we discovered the broker had been widening the spreads on the larger account at critical times, which caused the strategy to lose. But I digress.
As we’ve already discussed, your data set is THE most important thing to ensure your back test is as accurate as possible. Your Modelling Quality MUST be over 90% (I like to see 99%) because it reflects the completeness of your M1 data set. The report in the image shows a very low modelling quality, which is not acceptable for accurate back testing. MT4 actually takes one-minute data and “models” the tick data from it. If you’re interested in the specific algorithm for modelling, you may want to see this article. It discusses MT5, but I suspect the algorithm is the same in MT4.
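To make the idea of “modelling” concrete, here is a deliberately crude sketch. This is NOT MetaQuotes’ actual interpolation algorithm; it just shows that a single M1 candle’s open/high/low/close must be expanded into a guessed tick path, since the true order of prices inside the minute is not recorded.

```python
# Simplified sketch of modelling ticks from one M1 candle (NOT the real
# MT4 algorithm). Given only OHLC, the tester has to invent a tick path;
# whether the high or the low was touched first is unknowable from M1 data.

def model_ticks(o, h, l, c):
    """Return one plausible tick sequence for an M1 OHLC candle."""
    path = [o]
    if c >= o:
        path += [l, h]  # guess: on an up-candle, the dip came first
    else:
        path += [h, l]  # guess: on a down-candle, the rally came first
    path.append(c)
    return path

print(model_ticks(1.1000, 1.1012, 1.0995, 1.1008))
# One plausible path: [1.1000, 1.0995, 1.1012, 1.1008]
```

Any path that starts at the open, ends at the close, and touches the high and low is consistent with the candle, which is why modelled results can never be tick-perfect.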
With that having been said, as long as you have good data, the algorithm for modelling the ticks doesn’t matter for EAs like ours that wait for a candle close before taking action. If you have an EA that makes trading decisions on each tick, however, the order in which the ticks arrive is important. In that case you should ALWAYS test using the M1 data for the most accuracy. Even with M1 data, if a candle has both an upper and a lower shadow, the algorithm can’t know which one came first, but at least the modelled ticks will “catch up” to the real data every minute.
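For candle-close EAs, the usual defense against this tick-order ambiguity is a new-bar check. In MQL4 that is typically done by comparing the current bar’s open time against a stored value; here is the same idea mimicked in Python (a sketch, not MQL4 itself):

```python
# New-bar detection sketch. The MQL4 idiom compares Time[0] against a
# saved bar time inside start(); this mimics it in Python. The decision
# logic runs exactly once per bar, so the modelled order of ticks inside
# a candle never affects what the EA does.

last_bar_time = None
signals = []

def on_tick(bar_open_time, price):
    """Called on every incoming tick; acts only on the first tick of a bar."""
    global last_bar_time
    if bar_open_time == last_bar_time:
        return             # still inside the current candle: do nothing
    last_bar_time = bar_open_time
    signals.append(price)  # act once, on the first tick of the new bar

# Three ticks inside bar 1, then the first tick of bar 2:
for t, p in [(1, 100), (1, 101), (1, 99), (2, 102)]:
    on_tick(t, p)

print(signals)  # only the first tick of each bar triggered an action
```

Because the intermediate ticks (101 and 99) are ignored, this EA produces the same trades whether the tester models the shadows high-first or low-first.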