Efficient Pair Selection For Pair-Trading Strategies
Contents

Background
Introduction
1 The Engle and Granger procedure
2 Selection and backtesting procedure
3 Results and discussion
Conclusion
References
Appendix
Background
This research has been conducted with the support of my employer as an effort to automate the process of finding good stock pairs for quantitative strategies. As the ultimate aim is to use this research in a production environment, all the coding work has been done with the internal language. This is a proprietary language, so the code is not reproduced here; however, pseudo-code is provided when appropriate.
Great attention has also been given to efficiency in testing for pairs, which is why we focused on a large-scale implementation of the Engle and Granger procedure.
The universe of stocks under consideration is the TOPIX 500, and we have used the last 7 years of daily price data.
Introduction
Pair trading is a well-known and popular statistical arbitrage strategy. A pair is simply defined as two stocks that tend to move together (a notion we define more precisely below). The strategy consists in trading the spread (a long position in one of the stocks vs a short position in the other) when a dislocation between the two price paths is observed.
In the set-up of such a strategy, there are two distinct parts: the selection part (which pairs we want to choose) and the implementation of the trading strategy (when and in what sizes we shall trade). We focus here on the selection part. In order to backtest the selection process, however, we need some simple trading rules to implement the strategy.
While those pairs can be chosen purely on the basis of fundamental or statistical analysis, we take a combined approach using both. The ultimate goal is to have an automated procedure to select pairs that will generate positive PnL on average.
The most significant issues when choosing pairs for a systematic pair-trading strategy are the testing procedure (how do we define assets that move together?) and the fact that the problem can get computationally intense. There are about 50,000 different shares being traded in the world, so the naive / brute-force way to handle the problem would require running $\binom{50000}{2} = 1{,}249{,}975{,}000$ tests!
In the first section, we formalize the concept of assets moving together and propose an efficient way to test a large number of pairs using the ADF Statistic. In the second section, we integrate some more fundamental factors to filter pair candidates before statistical testing, with the aim of seeing which combination adds the most value to the strategy. Lastly, the results are presented and directions for improvement of the pair-selection model are discussed.
[Figure 1: Normalized prices and spread for two example pairs, 8309.T vs 8801.T and 8309.T vs 8766.T. Each panel shows the selection phase (2013 to 2015) and the implementation phase (October to December 2015), with the spread and its +2 / +4 standard deviation bands.]
1 The Engle and Granger procedure

1.1 Correlation and cointegration
The first idea when one wants to test for assets that move together is to use the correlation of the returns. However, strongly-correlated returns do not say much about long-term price paths, and at the same time the absence of correlation cannot be interpreted as independence. Also, from the perspective of implementing a pair-trading strategy, a strong correlation may prevent divergence and thus reduce arbitrage opportunities. Correlation breaks may still be used for pair strategies, but practitioners would only look at them for intraday / high-frequency trading. Our perspective is to set up a medium-term trading strategy (three-month backtest period), and correlation will not help identify assets that move together.
Engle and Granger [2] introduced the concept of cointegration.
A pair of assets $X_t$ and $Y_t$ is said to be cointegrated (of order 1) if:
$X_t$ and $Y_t$ are integrated of order 1, i.e. they are I(1);
there exist $\alpha$ and $b$ such that the linear combination $Z_t = Y_t - \alpha X_t - b$ is I(0) (i.e. the spread is stationary).
With the a priori knowledge that Xt and Yt are I(1), there are two steps in the Engle and Granger procedure:
1. Estimate the co-integrating relation (e.g. with OLS).
2. Test for stationarity of the residual (which we call spread here).
Cointegration makes it possible to capture shared stochastic trends between several processes. With cointegrated assets, we can expect mean-reverting behaviour: by going long $X_t$ / short $Y_t$ when the spread is positive, we should generate positive PnL once the spread reverts to its long-term level (naturally, we do the opposite trade when $Z_t$ is negative).
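As an illustration only, the two-step procedure for a single pair can be sketched in Python with numpy / statsmodels; this is not the internal-language implementation used in production, and the critical value is the Engle and Granger 5% value given in Table 1 below.

```python
# Minimal sketch of the Engle and Granger two-step procedure for one pair,
# using numpy / statsmodels (the production code uses an internal language).
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

def engle_granger_spread(x, y):
    """Step 1: regress Y on X (with intercept) and return alpha, b and the spread."""
    X = sm.add_constant(x)                # columns: [1, x]
    res = sm.OLS(y, X).fit()
    b, alpha = res.params                 # intercept first, then slope
    spread = y - alpha * x - b
    return alpha, b, spread

def is_cointegrated(x, y, crit=-3.37):
    """Step 2: ADF test on the spread, compared to the Engle-Granger 5% critical value."""
    alpha, b, spread = engle_granger_spread(x, y)
    adf_stat = adfuller(spread, maxlag=1, regression="c", autolag=None)[0]
    return adf_stat < crit, alpha, b
```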
As an aside, it should be noted that including the intercept term in the definition of cointegration should be considered carefully. We allowed for an intercept term to keep the approach general; however, this intercept term will not be traded once the pair strategy is set up: in reality, we only take positions in $X_t$ and $Y_t$. Also, in the implementation phase, there should be some restriction on $\alpha$. For example, a negative $\alpha$ implies that $X_t$ and $Y_t$ would both be bought or both sold, which does not make sense for a mean-reverting strategy. We can therefore disregard all the pairs with $\alpha \le 0$.
1.2 The ADF test
There are quite a few methods to test for stationarity of the spread and we have implemented several of them (see Section 3 for further discussion).
The most popular test in the industry is the Augmented Dickey-Fuller (ADF) test. Below is the general set-up for the ADF test:
$$Z_t = \beta_0 + \beta_1 t + \rho Z_{t-1} + \sum_{i=1}^{k} \phi_i \Delta Z_{t-i} + \epsilon_t$$
BIC or AIC can be used to get the optimal lag $k$. After testing on some pairs, we have seen that including lags of order 2, 3, 4, ... does not add much value while it significantly increases the complexity of the regression, hence the model with lag order 1 has been kept for all pairs.
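For reference, this lag comparison can be reproduced with statsmodels' adfuller, which can select the lag by information criterion; the sketch below assumes `spread` is an already-computed 1-D spread series and is not part of the production code.

```python
# Sketch: compare the information-criterion-selected lag with the fixed lag of 1
# retained for all pairs. `spread` is a 1-D array of spread values (assumption).
from statsmodels.tsa.stattools import adfuller

def compare_lags(spread):
    stat_auto, _, used_lag, *_ = adfuller(spread, regression="c", autolag="BIC")
    stat_lag1 = adfuller(spread, maxlag=1, regression="c", autolag=None)[0]
    return used_lag, stat_auto, stat_lag1
```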
In addition to the number of lags, the ADF Statistic varies depending on whether an intercept term and/or a linear trend are included. By construction, the spread should have zero mean and we could disregard the intercept term in the ADF formulation. However, we decided to follow what practitioners tend to do and kept the intercept term. Also, there could be a linear trend in the spread (for example if $X_t$ grows faster than $Y_t$ over the long term). In this case, we would also need to include a linear trend in the ADF formulation. But this would require a time-dependent hedge ratio in the implementation phase, which is not something we can afford (once we trade the pair, we do not want to amend the quantities every day).
Therefore, the ADF formulation we use from now on is:

$$Z_t = \alpha + \rho Z_{t-1} + \phi \Delta Z_{t-1} + \epsilon_t$$
The null hypothesis of the ADF test is that the spread Zt is a unit-root process, and the alternative is that the process is
a stationary process. More formally, we test:
$$H_0: \rho = 1 \qquad \text{against} \qquad H_1: \rho < 1$$
For each pair, the testing procedure then consists of four steps, which are detailed in matrix form in Section 1.3:
1. Find $\hat\alpha$ and $\hat b$ (estimate the cointegrating relation by OLS).
2. Find $\alpha$, $\rho$, $\phi$ by OLS regression of $Z_t$ on $Z_{t-1}$ and $\Delta Z_{t-1}$.
3. Find $\mathrm{Var}(\hat\rho)$.
4. Compute the T-Stat for $\hat\rho$.
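Written out for a single pair, steps 2 to 4 amount to one small OLS on the spread. The Python / numpy sketch below (not the internal implementation) makes the matrix version of Section 1.3 easier to follow.

```python
# Sketch: ADF statistic for one spread series Z (1-D numpy array), following the
# formulation Z_t = alpha + rho*Z_{t-1} + phi*dZ_{t-1} + eps_t with H0: rho = 1.
import numpy as np

def adf_stat_single(Z):
    dZ = np.diff(Z)                               # dZ_t = Z_t - Z_{t-1}
    y = Z[2:]                                     # Z_t for t = 3..N
    X = np.column_stack([Z[1:-1], dZ[:-1],        # Z_{t-1}, dZ_{t-1}
                         np.ones(len(Z) - 2)])    # intercept
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # (rho, phi, alpha)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(Z) - 5)         # N-2 observations, 3 parameters
    var_rho = sigma2 * np.linalg.inv(X.T @ X)[0, 0]
    return (beta[0] - 1.0) / np.sqrt(var_rho)     # t-statistic for H0: rho = 1
```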
In the Engle and Granger procedure, it should be noted that the usual ADF tabulated values cannot be used directly (as we test for stationarity on a derived variable, the residuals of an OLS regression). We use the values estimated by MacKinnon [4] with 200 data points. All our tests use the 5% critical value.
Table 1: Critical values for the Engle and Granger cointegration test

    1pct     5pct     10pct
    -3.95    -3.37    -3.07

1.3 The ADF test in matrix form
In order to avoid looping through the pairs and getting the ADF Statistics one by one, we need to rewrite the test in matrix form. Below is the detailed procedure.
Assume we have $M$ pairs, with each stock having $N$ points of price history. We write $X^i$ and $Y^i$ for the column vectors of the price series of the two legs of pair $i$:
$$X = \begin{pmatrix} X^1_1 & X^2_1 & \dots & X^M_1 \\ \vdots & \vdots & & \vdots \\ X^1_N & X^2_N & \dots & X^M_N \end{pmatrix}, \qquad Y = \begin{pmatrix} Y^1_1 & Y^2_1 & \dots & Y^M_1 \\ \vdots & \vdots & & \vdots \\ Y^1_N & Y^2_N & \dots & Y^M_N \end{pmatrix}$$

1. Find $\hat\alpha_i$, $\hat b_i$

For each pair $i$, the cointegrating relation is estimated by the OLS regression

$$\begin{pmatrix} Y^i_1 \\ \vdots \\ Y^i_N \end{pmatrix} = \begin{pmatrix} X^i_1 & 1 \\ \vdots & \vdots \\ X^i_N & 1 \end{pmatrix} \begin{pmatrix} \alpha_i \\ b_i \end{pmatrix} + \begin{pmatrix} u^i_1 \\ \vdots \\ u^i_N \end{pmatrix}$$

so that

$$\begin{pmatrix} \hat\alpha_i \\ \hat b_i \end{pmatrix} = \left(\Omega_1^{i\prime} \Omega_1^i\right)^{-1} \Omega_1^{i\prime} Y^i \qquad \text{where} \qquad \Omega_1^i = \begin{pmatrix} X^i_1 & 1 \\ \vdots & \vdots \\ X^i_N & 1 \end{pmatrix}$$

Expanding the $2 \times 2$ inverse gives, with all sums running over $k = 1, \dots, N$,

$$\hat\alpha_i = \frac{N \sum X^i_k Y^i_k - \sum X^i_k \sum Y^i_k}{N \sum (X^i_k)^2 - \left(\sum X^i_k\right)^2}, \qquad \hat b_i = \frac{\sum (X^i_k)^2 \sum Y^i_k - \sum X^i_k Y^i_k \sum X^i_k}{N \sum (X^i_k)^2 - \left(\sum X^i_k\right)^2}$$

The spreads are collected in the $N \times M$ matrix

$$Z = \begin{pmatrix} Z^1_1 & Z^2_1 & \dots & Z^M_1 \\ \vdots & \vdots & & \vdots \\ Z^1_N & Z^2_N & \dots & Z^M_N \end{pmatrix} \qquad \text{with} \qquad Z^i_k = Y^i_k - \hat\alpha_i X^i_k - \hat b_i$$
2. Find $\alpha_i$, $\rho_i$, $\phi_i$ by OLS regression of $Z_t$ on $Z_{t-1}$ and $\Delta Z_{t-1}$

We start by differencing and lagging the spread and write the compact form of the regression. For pair $i$, we get:

$$\begin{pmatrix} Z^i_3 \\ \vdots \\ Z^i_N \end{pmatrix} = \begin{pmatrix} Z^i_2 & \Delta Z^i_2 & 1 \\ \vdots & \vdots & \vdots \\ Z^i_{N-1} & \Delta Z^i_{N-1} & 1 \end{pmatrix} \begin{pmatrix} \rho_i \\ \phi_i \\ \alpha_i \end{pmatrix} + \begin{pmatrix} v^i_3 \\ \vdots \\ v^i_N \end{pmatrix}$$

so that

$$\begin{pmatrix} \hat\rho_i \\ \hat\phi_i \\ \hat\alpha_i \end{pmatrix} = \left(\Omega_2^{i\prime} \Omega_2^i\right)^{-1} \Omega_2^{i\prime} \tilde Z^i \qquad \text{where} \qquad \Omega_2^i = \begin{pmatrix} Z^i_2 & \Delta Z^i_2 & 1 \\ \vdots & \vdots & \vdots \\ Z^i_{N-1} & \Delta Z^i_{N-1} & 1 \end{pmatrix}, \quad \tilde Z^i = \begin{pmatrix} Z^i_3 \\ \vdots \\ Z^i_N \end{pmatrix}$$

Expanding the $3 \times 3$ inverse gives, for each pair $i$,

$$\begin{pmatrix} \hat\rho_i \\ \hat\phi_i \\ \hat\alpha_i \end{pmatrix} = \frac{1}{\mathrm{Det}_i} \begin{pmatrix} A_1 + A_2 + A_3 \\ A_4 + A_5 + A_6 \\ A_7 + A_8 + A_9 \end{pmatrix}$$
where (all sums running over $k = 3, \dots, N$):

$$\begin{aligned}
A_1 &= \Big( (N-2) \textstyle\sum (\Delta Z^i_{k-1})^2 - \big(\sum \Delta Z^i_{k-1}\big)^2 \Big) \sum Z^i_k Z^i_{k-1} \\
A_2 &= \Big( \textstyle\sum Z^i_{k-1} \sum \Delta Z^i_{k-1} - (N-2) \sum Z^i_{k-1} \Delta Z^i_{k-1} \Big) \sum Z^i_k \Delta Z^i_{k-1} \\
A_3 &= \Big( \textstyle\sum Z^i_{k-1} \Delta Z^i_{k-1} \sum \Delta Z^i_{k-1} - \sum (\Delta Z^i_{k-1})^2 \sum Z^i_{k-1} \Big) \sum Z^i_k \\
A_4 &= \Big( \textstyle\sum Z^i_{k-1} \sum \Delta Z^i_{k-1} - (N-2) \sum Z^i_{k-1} \Delta Z^i_{k-1} \Big) \sum Z^i_k Z^i_{k-1} \\
A_5 &= \Big( (N-2) \textstyle\sum (Z^i_{k-1})^2 - \big(\sum Z^i_{k-1}\big)^2 \Big) \sum Z^i_k \Delta Z^i_{k-1} \\
A_6 &= \Big( \textstyle\sum Z^i_{k-1} \Delta Z^i_{k-1} \sum Z^i_{k-1} - \sum (Z^i_{k-1})^2 \sum \Delta Z^i_{k-1} \Big) \sum Z^i_k \\
A_7 &= \Big( \textstyle\sum Z^i_{k-1} \Delta Z^i_{k-1} \sum \Delta Z^i_{k-1} - \sum (\Delta Z^i_{k-1})^2 \sum Z^i_{k-1} \Big) \sum Z^i_k Z^i_{k-1} \\
A_8 &= \Big( \textstyle\sum Z^i_{k-1} \Delta Z^i_{k-1} \sum Z^i_{k-1} - \sum (Z^i_{k-1})^2 \sum \Delta Z^i_{k-1} \Big) \sum Z^i_k \Delta Z^i_{k-1} \\
A_9 &= \Big( \textstyle\sum (Z^i_{k-1})^2 \sum (\Delta Z^i_{k-1})^2 - \big(\sum Z^i_{k-1} \Delta Z^i_{k-1}\big)^2 \Big) \sum Z^i_k
\end{aligned}$$

and

$$\mathrm{Det}_i = (N-2) \sum (Z^i_{k-1})^2 \sum (\Delta Z^i_{k-1})^2 + 2 \sum Z^i_{k-1} \sum \Delta Z^i_{k-1} \sum Z^i_{k-1} \Delta Z^i_{k-1} - \sum (\Delta Z^i_{k-1})^2 \big(\sum Z^i_{k-1}\big)^2 - (N-2) \big(\sum Z^i_{k-1} \Delta Z^i_{k-1}\big)^2 - \sum (Z^i_{k-1})^2 \big(\sum \Delta Z^i_{k-1}\big)^2$$
3. Find $\mathrm{Var}(\hat\rho_i)$

We have:

$$\mathrm{Var}\big(\hat\theta_i\big) = (\hat\sigma_i)^2 \left(\Omega_2^{i\prime} \Omega_2^i\right)^{-1} \qquad \text{where } \hat\theta_i = (\hat\rho_i, \hat\phi_i, \hat\alpha_i)' \text{ and } (\hat\sigma_i)^2 \text{ is the mean squared error of the previous regression}$$

In this case, we only care about the standard error of the $\hat\rho_i$ coefficient:

$$\mathrm{Var}(\hat\rho_i) = (\hat\sigma_i)^2 \, \frac{(N-2) \sum (\Delta Z^i_{k-1})^2 - \big(\sum \Delta Z^i_{k-1}\big)^2}{\mathrm{Det}_i} \qquad \text{with} \qquad (\hat\sigma_i)^2 = \frac{1}{N-5} \sum_{k=3}^{N} \big(Z^i_k - \hat\rho_i Z^i_{k-1} - \hat\phi_i \Delta Z^i_{k-1} - \hat\alpha_i\big)^2$$

In the above, $N-5$ appears as there are $N-2$ data points and the OLS removes 3 degrees of freedom.
4. Compute the T-Stat for $\hat\rho_i$

The ADF Statistic for pair $i$ is simply

$$ADF_i = \frac{\hat\rho_i - 1}{\sqrt{\mathrm{Var}(\hat\rho_i)}}$$

A pair is considered to be cointegrated if its ADF Statistic is below the 5% critical value.
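As an illustration of the matrix form above, a vectorized sketch in Python / numpy (the production code uses the internal language) can compute all $M$ ADF statistics at once from the $N \times M$ spread matrix $Z$, using exactly the closed-form sums listed above.

```python
# Sketch: vectorized ADF statistics for an (N, M) spread matrix Z (one column per pair),
# mirroring the closed-form expressions above. Numpy only; not the production code.
import numpy as np

def adf_stats_all_pairs(Z):
    N = Z.shape[0]
    n = N - 2                                   # observations per regression
    dZ = np.diff(Z, axis=0)                     # dZ_t = Z_t - Z_{t-1}, shape (N-1, M)
    y, z1, dz1 = Z[2:], Z[1:-1], dZ[:-1]        # Z_t, Z_{t-1}, dZ_{t-1}, shape (n, M)

    # Column-wise sums appearing in the closed-form expressions
    S1, S2 = z1.sum(0), dz1.sum(0)
    S11, S22, S12 = (z1**2).sum(0), (dz1**2).sum(0), (z1*dz1).sum(0)
    T0, T1, T2 = y.sum(0), (y*z1).sum(0), (y*dz1).sum(0)

    det = n*S11*S22 + 2*S1*S2*S12 - S22*S1**2 - n*S12**2 - S11*S2**2
    rho = ((n*S22 - S2**2)*T1 + (S1*S2 - n*S12)*T2 + (S2*S12 - S1*S22)*T0) / det
    phi = ((S1*S2 - n*S12)*T1 + (n*S11 - S1**2)*T2 + (S1*S12 - S2*S11)*T0) / det
    alpha = ((S2*S12 - S1*S22)*T1 + (S1*S12 - S2*S11)*T2 + (S11*S22 - S12**2)*T0) / det

    resid = y - rho*z1 - phi*dz1 - alpha        # broadcasting across the M columns
    sigma2 = (resid**2).sum(0) / (N - 5)
    var_rho = sigma2 * (n*S22 - S2**2) / det
    return (rho - 1.0) / np.sqrt(var_rho)       # one ADF statistic per pair
```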
2 Selection and backtesting procedure

2.1 Cointegration and backtest windows

As mentioned in the introductory section, a fairly long period of time has been used to set up the pair selection procedure (the last 7 years of data). We call the cointegration window the period over which we test for cointegration, and the backtest window the period over which we calculate the performance of the chosen pairs. Cointegration windows correspond to the estimation period; they are followed by the backtest windows, which correspond to the trading period. The cointegration is estimated over a period of 2 years while the backtest length is 3 months. We take rolling windows for each, hence we have 20 estimation / backtest cycles (the estimation windows overlap).
Below is a simple scheme for one cointegration window / one backtest window, followed by pseudo-code for the selection and backtest procedure.
[Figure: Cointegration and backtest windows]
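Since the internal-language code cannot be reproduced, the following Python-style pseudo-code is only a sketch of the loop described above; `get_prices`, `adf_stat` and `simulate_trading_rules` are illustrative placeholders, not an actual API.

```python
# Python-style pseudo-code for the rolling selection / backtest procedure.
# `get_prices`, `adf_stat` and `simulate_trading_rules` are illustrative placeholders.
from dateutil.relativedelta import relativedelta

COINT_WINDOW = relativedelta(years=2)      # estimation period
BACKTEST_WINDOW = relativedelta(months=3)  # trading period
ADF_CRIT_5PCT = -3.37                      # Engle-Granger 5% critical value (Table 1)

def run_selection_and_backtest(candidate_pairs, first_backtest_start, nb_cycles=20):
    cycle_results = []
    start = first_backtest_start
    for _ in range(nb_cycles):
        # 1. Cointegration window: test every candidate pair, keep the cointegrated ones
        coint_prices = get_prices(start - COINT_WINDOW, start)
        selected = []
        for x, y in candidate_pairs:
            stat, alpha, b = adf_stat(coint_prices[x], coint_prices[y])
            if stat < ADF_CRIT_5PCT:
                selected.append((x, y, alpha, b))
        # 2. Backtest window: trade the selected pairs with the rules of Section 2.3
        trade_prices = get_prices(start, start + BACKTEST_WINDOW)
        cycle_results.append(simulate_trading_rules(trade_prices, selected))
        start += BACKTEST_WINDOW              # roll both windows by 3 months
    return cycle_results
```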
2.2
Before any cointegration test, we need to bucket our universe. We will only look for cointegrated pairs inside each bucket.
This step is required as cointegration tests (at least the ADF test) are not good at identifying spurious relationships (we
will take a closer look at this problem in the section 3).
There should be an economic rationale behind each pair trade - i.e. we want the two legs of the pair to be connected due
to fundamental reasons (having a similar business model is a good starting point).
This step is crucial and the ADF test should only be used to validate a relationship - it may not be the main discriminating
tool.
We have tested several factors / factor combinations to bucket the TOPIX 500 universe:

Sectors: stocks in the same sector tend to be affected by the same macro-economic factors. We have used the GICS level 1 sector classification (10 sectors).

Size: the market tends to be segmented by size, as different investors tend to focus on different size buckets. As a result, the long-term price path of the small-cap bucket may be quite different from that of the large-cap bucket, so we prefer to keep both legs of a pair in the same bucket. The Size factor is proxied by the log of the market capitalisation.

Value: the Value factor is well documented in the academic literature as an explanation of stock returns. A value stock can be seen as a cheap stock, as its Price / Earnings ratio is below comparable stocks. On the other hand, a stock with a relatively high P/E ratio can be seen as expensive. We can include the Value factor in our screening process; however, in this case, we would rather choose stocks in different buckets than in the same bucket. More precisely, we need to go long the stock in the cheap bucket and short the stock in the expensive bucket so as to maximize the potential for mean-reversion. We used the Axioma Robust™ Risk Model to proxy this factor.

Volatility / Market Beta: stocks with dissimilar volatility or beta profiles are unlikely to follow the same long-term price trends, e.g. stocks with high beta / volatility react more strongly to good or bad economic news. We therefore keep each leg of the pair in the same bucket. We used the Axioma Robust™ Risk Model to proxy this factor (it could also be proxied by the variance of the log-returns / covariance of the log-returns vs the market).

For the Size, Value and Volatility factors, three buckets are formed and stocks are classified accordingly. This model can be extended to other factors, for which we may want to keep pairs in the same or in opposite buckets; adding a factor should be justified by some economic intuition.
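As an illustration, candidate pairs for a given factor configuration could be generated along the following lines; the data layout and bucket labels are assumptions made for the sketch, not the internal data model.

```python
# Sketch: generate candidate pairs from bucket assignments (illustrative data layout).
from itertools import combinations

def candidate_pairs(stocks, same=("sector",), different=()):
    """stocks: dict ticker -> dict of bucket labels, e.g.
    {"8309.T": {"sector": "Financials", "size": 2, "value": 0, "beta": 1}, ...}"""
    pairs = []
    for a, b in combinations(sorted(stocks), 2):
        sa, sb = stocks[a], stocks[b]
        if all(sa[f] == sb[f] for f in same) and all(sa[f] != sb[f] for f in different):
            pairs.append((a, b))
    return pairs

# Example: sector and value bucketing (same sector, opposite value buckets):
#   candidate_pairs(stocks, same=("sector",), different=("value",))
```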
2.3 Backtesting methodology
Once the pair candidates have been filtered and the cointegration tests run, there is a backtesting phase. As mentioned previously, the backtest window is 3 months and we have 20 of those in the overall backtest.
Backtesting the pairs requires some trading rules to decide when a position is opened and closed. This study focuses on the selection part, so the trading rules are kept as simple as possible. The standard in the industry is to enter a position when the spread is 2 standard deviations away from its historical mean.
Below are the detailed trading rules:
1. Regress $Y$ on $X$ over the cointegration window to get $\alpha$ and $b$. Compute the standard deviation $\sigma$ of the spread $Z$.
2. Calculate the hedge ratio as $H = \alpha X_{t_N} / Y_{t_N}$, with $t_N$ the last day of the cointegration period. $H$ represents the dollar amount of $X$ that shall be bought or sold for every 1 dollar in $Y$.
3. Each day $t$ in the backtest window, compute the spread $Z_t$:
If $Z_t \ge 2\sigma$, go short $Y$ and long $X$ (at a ratio of $H$ dollars of $X$ for every dollar of $Y$);
If $Z_t \le -2\sigma$, go long $Y$ and short $X$.
4. Close the position:
when $Z_t$ is back to 0 (this locks in a profit);
when $Z_t \ge 4\sigma$ or $Z_t \le -4\sigma$ (in this case, we take a loss).
A basic risk management rule has been included, to avoid keeping the position open if prices continue to diverge. Also, we use the last day of the cointegration window to calculate the hedge ratio, instead of the average over the whole window period, in order to have pairs with a net notional value close to $0 (as the prices may have deviated quite a lot during the cointegration window). This constraint ensures the portfolio of pairs is (fairly) market neutral.
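A minimal sketch of these trading rules for a single pair is given below, ignoring transaction costs; the function and its inputs are illustrative, not the production implementation.

```python
# Sketch: daily PnL of one pair under the +/-2 sigma entry and 0 / +/-4 sigma exit rules.
# x, y are price arrays over the backtest window; alpha, b, sigma come from the
# cointegration window; H is the dollar hedge ratio. Costs and slippage are ignored.
import numpy as np

def pair_daily_pnl(x, y, alpha, b, sigma, H):
    z = y - alpha * x - b                    # spread on each backtest day
    pnl = np.zeros(len(z))
    pos = 0                                  # +1: long Y / short X, -1: short Y / long X
    for t in range(1, len(z)):
        if pos != 0:
            ret_y = y[t] / y[t - 1] - 1.0
            ret_x = x[t] / x[t - 1] - 1.0
            pnl[t] = pos * (ret_y - H * ret_x)   # $1 in Y vs H dollars in X
        if pos == 0:
            if z[t] >= 2 * sigma:
                pos = -1                     # spread rich: short Y, long X
            elif z[t] <= -2 * sigma:
                pos = +1                     # spread cheap: long Y, short X
        elif pos * z[t] >= 0 or abs(z[t]) >= 4 * sigma:
            pos = 0                          # take profit at 0 or stop out at 4 sigma
    return pnl
```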
Assuming there are $K$ days in the backtest window, a pair $i$ is said to be winning if

$$\sum_{k=1}^{K} r^i_{t_k} \ge 0$$

where $r^i_t$ denotes the daily PnL of pair $i$. The (daily) Sharpe ratio of a pair is $\bar r^i / \sigma(r^i)$, with $\bar r^i$ the average daily PnL and $\sigma(r^i)$ the volatility of the daily PnL.
Lastly, we define the daily PnL of the strategy and the (daily) Sharpe ratio of the strategy:

$$r_t = \frac{1}{M} \sum_{i=1}^{M} r^i_t, \qquad S = \frac{\bar r}{\sigma(r)}$$
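From a matrix of daily pair PnLs, the quantities reported in the tables of the next section can be computed as in the sketch below; counting only pairs that actually opened a position is our own assumption, made to match the spirit of the reported figures.

```python
# Sketch: aggregate a (K, M) matrix of daily pair PnLs (K days, M pairs).
import numpy as np

def strategy_stats(pnl):
    """pnl[k, i] = daily PnL r^i on day t_k of the backtest window."""
    traded = np.any(pnl != 0, axis=0)        # assumption: only pairs that opened a position
    total = pnl.sum(axis=0)
    nb_winning = int(np.sum(traded & (total >= 0)))
    nb_losing = int(np.sum(traded & (total < 0)))
    r = pnl.mean(axis=1)                     # equal-weight daily strategy PnL
    sharpe = r.mean() / r.std(ddof=1)        # (daily) strategy Sharpe ratio
    return nb_winning, nb_losing, sharpe
```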
3 Results and discussion

3.1 Computation time
The major initial issue was the slowness of running multiple ADF tests; this has been overcome.
In the least-restricted pair-candidate universe (with only a sector bucketing), there are around 18,000 pairs to test. Using a loop over the pre-packaged ADF function, this would take around 20 minutes (a similar time was required to run a batch with R). The matrix implementation of the ADF test described previously shrank the computation time by a factor of about 40: it takes about 30 seconds to test 18,000 pairs and output the ADF Statistic and the $\alpha$ and $b$ coefficients (we need to keep them as they are required later to compute the spread).
3.2 Performance analysis
We obtain equivocal results on the main question: what are the best factors for selecting good pairs?
Below are the summary backtest results for all factor configurations, the detailed results for the bucketing by sector, and the detailed results for the Value factor. The appendix contains the detailed results for the combinations: sector and value, sector and size, sector and market beta.
Table 2: Summary for the entire backtest period and different factor configurations

Bucketing           Nb Pairs   Nb Losing   Nb Winning   Strategy Sharpe
Sector              37,737     10,309      10,989        3.30%
Value               40,308      6,413       7,036        4.80%
Sector and Value     2,202        387         366       -0.85%
Sector and Size     13,005      3,540       3,775        2.45%
Sector and Beta     16,961      4,487       5,026        6.25%
Table 3: Detailed results per backtest period, bucketing by Sector

Start Backtest   End Backtest   Nb Pairs   Nb Losing   Nb Winning   Strategy Sharpe
04-Jan-10        05-Apr-10      2,615        383          422          5%
02-Apr-10        02-Jul-10      2,578        392          526         10%
02-Jul-10        04-Oct-10      2,416        332          301          1%
04-Oct-10        04-Jan-11      2,513        293          516         30%
04-Jan-11        04-Apr-11      1,821        429          631         17%
05-Apr-11        05-Jul-11      1,458        231          391         16%
05-Jul-11        05-Oct-11      1,290        507          258        -19%
05-Oct-11        05-Jan-12      1,021        177          305         15%
05-Jan-12        05-Apr-12      1,266        329          332         -1%
05-Apr-12        05-Jul-12      1,378        378          399         -1%
05-Jul-12        05-Oct-12      1,135        408          314        -18%
05-Oct-12        07-Jan-13      1,090        358          338          2%
07-Jan-13        08-Apr-13      1,158        607          402        -17%
05-Apr-13        05-Jul-13      1,091        454          561         16%
05-Jul-13        07-Oct-13      2,505        760        1,056         17%
07-Oct-13        07-Jan-14      2,976      1,146        1,044         -7%
07-Jan-14        07-Apr-14      2,370        604        1,067         31%
08-Apr-14        08-Jul-14      2,995        901          874         -2%
08-Jul-14        08-Oct-14      2,417        929          713        -18%
08-Oct-14        08-Jan-15      1,644        691          539        -11%
Period Average                                                         3%
Table 4: Detailed results per backtest period, bucketing by the Value factor

Start Backtest   End Backtest   Nb Pairs   Nb Losing   Nb Winning   Strategy Sharpe
04-Jan-10        05-Apr-10      2,674        275          392         12%
02-Apr-10        02-Jul-10      2,676        261          259          4%
02-Jul-10        04-Oct-10      2,911        368          367          1%
04-Oct-10        04-Jan-11      2,507         68          317         35%
04-Jan-11        04-Apr-11      1,521        211          277          2%
05-Apr-11        05-Jul-11      1,305         99          349         20%
05-Jul-11        05-Oct-11      1,724        344          147        -16%
05-Oct-11        05-Jan-12      1,096         86          249         21%
05-Jan-12        05-Apr-12      1,329        146          163         12%
05-Apr-12        05-Jul-12      1,662        435          364        -10%
05-Jul-12        05-Oct-12      1,012        317          201        -16%
05-Oct-12        07-Jan-13        832         79          181         33%
07-Jan-13        08-Apr-13      1,119        214          169        -21%
05-Apr-13        05-Jul-13      1,098        323          338         -1%
05-Jul-13        07-Oct-13      2,635        440          380          1%
07-Oct-13        07-Jan-14      2,878        453          609         18%
07-Jan-14        07-Apr-14      2,147        406          572          7%
08-Apr-14        08-Jul-14      3,606        581          705          3%
08-Jul-14        08-Oct-14      3,224        812          453        -16%
08-Oct-14        08-Jan-15      2,352        495          544          7%
Period Average                                                         5%
3.3 Cointegration stability
Regardless of the implementation, the whole pair-trading strategy relies on cointegration being stable. As for any strategy using historical data, we test over a past period and expect the model to remain valid in the next one. We implicitly assume that pairs cointegrated in the past period will keep the same behaviour in the next period.
In light of the generally disappointing results of the previous section, this assumption should be challenged and cointegration stability should be tested. A first indicator that something is going wrong is the huge variation in the number of pairs that pass the ADF test (see Table 3: depending on the period, there may be from 1,000 to 3,000 pairs with an ADF Statistic below the 5% critical value).
To investigate this further, we proceed to a simple test:
Take a 4-year period, say January 2011 to January 2015, and divide it into two.
Take a large number of pairs, say all the pairs given by bucketing by sector only; this gives 16,874 pairs.
Check for cointegration separately in each of the two periods.
Calculate $P(C_1^{ADF} \cap C_2^{ADF})$ and $P(C_1^{ADF}) P(C_2^{ADF})$, where $C_k^{ADF}$ is the event that a pair is cointegrated during period $k$ at the 5% significance level of the ADF test.
For our sample period on the TOPIX 500, we get:

$$P(C_1^{ADF}) = \frac{4251}{16874} = 25.19\%, \qquad P(C_2^{ADF}) = \frac{2310}{16874} = 13.69\%, \qquad P(C_1^{ADF} \cap C_2^{ADF}) = \frac{579}{16874} = 3.43\%$$

As a result:

$$P(C_1^{ADF}) P(C_2^{ADF}) = 3.45\% \simeq P(C_1^{ADF} \cap C_2^{ADF})$$

In other words, being cointegrated in the first period carries almost no information about being cointegrated in the second: the two events are close to independent, so cointegration does not appear to be persistent.
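The persistence check itself is a one-liner once the two rounds of ADF tests have been run; the following is only a sketch of that comparison.

```python
# Sketch: persistence check for cointegration across two sub-periods.
# c1, c2 are boolean arrays, one entry per candidate pair (True = ADF statistic below
# the 5% critical value in that sub-period).
import numpy as np

def persistence_check(c1, c2):
    p1, p2, p_joint = c1.mean(), c2.mean(), (c1 & c2).mean()
    return p1, p2, p_joint, p1 * p2   # p_joint close to p1 * p2 means near-independence

# With the TOPIX 500 sample above: p1 = 25.19%, p2 = 13.69%, p_joint = 3.43%,
# and p1 * p2 = 3.45%.
```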
When looking at cointegration, we actually want to check that the spread is a stationary process. For this purpose, it would make sense to use a test with stationarity as the null hypothesis. The KPSS test is designed in this way. It starts with the model:

$$z_t = \beta' D_t + \mu_t + u_t, \qquad \mu_t = \mu_{t-1} + v_t$$

where $D_t$ contains deterministic components (a constant, or a constant and a trend), $u_t$ is I(0) and $v_t$ is white noise; the null hypothesis of stationarity corresponds to the variance of $v_t$ being zero.
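A stationarity-as-null check on a given spread could be run with statsmodels' KPSS implementation, as in the sketch below (not part of the production code).

```python
# Sketch: KPSS test on a spread series, with stationarity (around a constant) as the
# null hypothesis. `spread` is a 1-D array; uses statsmodels, not the internal language.
from statsmodels.tsa.stattools import kpss

def kpss_stationary_at_5pct(spread):
    stat, p_value, lags, crit = kpss(spread, regression="c", nlags="auto")
    return stat < crit["5%"]          # True: we fail to reject stationarity at 5%
```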
$$P(C_2^{PP}) = 4.78\%$$
Again, those results show that cointegration for stock pairs is not persistent through time.
Conclusion
The main contribution of this study is the re-writing of the Engle and Granger procedure in a form that enables to test
multiple pairs very rapidly. The procedure developped here also enabled to include easily some fundamental factors to
guide the search for profitable pairs. In the optimized configuration (sector and market beta bucketting), the strategy
generates a Sharpe ratio close to 1.
However, the tests that have been further made showed that cointegration may be an unreliable metric in a mean-reverting
strategy. This leaves the door open to other ways to approach mean-reversion (in this perspective, we can think about
modelling the spread with a Vector-Error-Correction-Model - VECM).
14
References
[1] Dickey, D. A. and Fuller, W. A. Distribution of the Estimators for Autoregressive Time Series with a Unit Root. Journal of the American Statistical Association, 1979.
[2] Engle, Robert and Granger, C. W. J. Co-Integration and Error Correction: Representation, Estimation, and Testing. Econometrica, Vol. 55, 1987.
[3] Wang, Jieren. A high performance pair trading application. Parallel & Distributed Processing, IEEE, 2009.
[4] MacKinnon, J. G. Critical values for cointegration tests. Chapter 13 in Long-Run Economic Relationships: Readings in Cointegration. Oxford: Oxford University Press, 1991.
[5] Sjö, Bo. Testing for Unit Roots and Cointegration. August 2008.
[6] Zivot, Eric. Unit Root Tests. Chapter 4, Lecture Notes on Time Series Econometrics, 2006.
Appendix
Table 5: Bucketing by Sector and Value factor

Start Backtest   End Backtest   Nb Pairs   Nb Losing   Nb Winning   Strategy Sharpe
04-Jan-10        05-Apr-10        169         22           29          0%
02-Apr-10        02-Jul-10        205         15           15         -8%
02-Jul-10        04-Oct-10        141         20           15          4%
04-Oct-10        04-Jan-11        125          2           22         29%
04-Jan-11        04-Apr-11         68         17           11        -14%
05-Apr-11        05-Jul-11         47          4           18         20%
05-Jul-11        05-Oct-11         61         17            5        -19%
05-Oct-11        05-Jan-12         55          3           11         21%
05-Jan-12        05-Apr-12         78          5            8         11%
05-Apr-12        05-Jul-12         81         19           14         -6%
05-Jul-12        05-Oct-12         48         13            8        -10%
05-Oct-12        07-Jan-13         29          4            9         10%
07-Jan-13        08-Apr-13         73         20            7        -35%
05-Apr-13        05-Jul-13         48         12           14          6%
05-Jul-13        07-Oct-13        147         22           15         -9%
07-Oct-13        07-Jan-14        221         45           40          7%
07-Jan-14        07-Apr-14        153         31           51          9%
08-Apr-14        08-Jul-14        212         47           33         -8%
08-Jul-14        08-Oct-14        140         47           20        -21%
08-Oct-14        08-Jan-15        101         22           21         -4%
Period Average                                                        -1%
Table 6: Bucketing by Sector and Size factor

Start Backtest   End Backtest   Nb Pairs   Nb Losing   Nb Winning   Strategy Sharpe
04-Jan-10        05-Apr-10        916        142          141         -1%
02-Apr-10        02-Jul-10        874        127          182         12%
02-Jul-10        04-Oct-10        871        120          112         -2%
04-Oct-10        04-Jan-11        857         94          174         32%
04-Jan-11        04-Apr-11        622        155          218         15%
05-Apr-11        05-Jul-11        491         68          140         20%
05-Jul-11        05-Oct-11        414        161           87        -19%
05-Oct-11        05-Jan-12        321         64           92         11%
05-Jan-12        05-Apr-12        458        114          117         -2%
05-Apr-12        05-Jul-12        477        113          144          0%
05-Jul-12        05-Oct-12        399        151          112        -19%
05-Oct-12        07-Jan-13        391        127          123          5%
07-Jan-13        08-Apr-13        399        215          140        -17%
05-Apr-13        05-Jul-13        333        134          177         10%
05-Jul-13        07-Oct-13        900        289          349         11%
07-Oct-13        07-Jan-14      1,044        399          367         -6%
07-Jan-14        07-Apr-14        850        222          374         26%
08-Apr-14        08-Jul-14      1,043        310          316          1%
08-Jul-14        08-Oct-14        802        308          233        -18%
08-Oct-14        08-Jan-15        543        227          177        -10%
Period Average                                                         2%
Table 7: Bucketing by Sector and Market Beta factor

Start Backtest   End Backtest   Nb Pairs   Nb Losing   Nb Winning   Strategy Sharpe
04-Jan-10        05-Apr-10      1,242        189          181          3%
02-Apr-10        02-Jul-10      1,219        191          243          8%
02-Jul-10        04-Oct-10      1,198        174          160          1%
04-Oct-10        04-Jan-11      1,138        139          253         39%
04-Jan-11        04-Apr-11        771        196          287         16%
05-Apr-11        05-Jul-11        543         76          134         22%
05-Jul-11        05-Oct-11        502        184          110        -15%
05-Oct-11        05-Jan-12        475         80          152         19%
05-Jan-12        05-Apr-12        677        171          191         -1%
05-Apr-12        05-Jul-12        746        199          222          4%
05-Jul-12        05-Oct-12        618        217          171        -16%
05-Oct-12        07-Jan-13        604        183          188          7%
07-Jan-13        08-Apr-13        626        319          223        -13%
05-Apr-13        05-Jul-13        466        187          238         10%
05-Jul-13        07-Oct-13      1,030        314          440         18%
07-Oct-13        07-Jan-14      1,126        430          421          0%
07-Jan-14        07-Apr-14      1,009        210          498         41%
08-Apr-14        08-Jul-14      1,250        386          379          6%
08-Jul-14        08-Oct-14      1,025        371          302        -19%
08-Oct-14        08-Jan-15        696        271          233         -5%
Period Average                                                         6%