**Note: The back test here is 100% incorrect. I also wish to correctly scale the independent variable position sizes per the hedge ratio; I am leaning towards working in share prices and will update accordingly. All the statistical testing is correct; however, the back test is 100% incorrect. I also run the Johansen test on the stationary series obtained from the linear regression hedge ratio, which is an unnecessary step, as we can use the hedge ratio from linear regression to scale the independent variable position sizes. I will get to the revisions when I can.**

A statistical arbitrage opportunity exists between a cointegrated pair when the price relationship between the pair moves out of equilibrium. This short-term disequilibrium creates a trading opportunity: enter a short position in one series and an equally weighted long position in the other. The aim is that the two series will eventually converge and the trade can be closed out with a profit.

Essentially, between a cointegrated pair the spread is mean reverting. The time to mean reversion is one factor to consider: how long will the period of disequilibrium take to revert to the mean, and will it ever revert at all? Cointegration can and does break down. So one question to answer is: how do I know if the trade will converge, and when is it time to exit with a loss?

The half life of mean reversion, calculated on the spread, can help answer this question. If historically a pair of stocks A and B has been shown to mean revert within n days, the trade can be closed out with a loss when this threshold is exceeded by a long margin, when the trade meets a pre-determined stop loss percentage, or when it meets a stop loss measured in standard deviations.

A pair of stocks can share a linear trend; however, we can form a stationary series by estimating the hedge ratio with ordinary least squares regression. In practice we run two OLS regressions: one with Stock A as the independent variable and one with Stock B as the independent variable. We then pick the regression with the most significant t-statistic.
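
As a minimal sketch of this both-directions step (on synthetic prices, not EWA/EWC; the post later compares ADF test statistics on the residuals, while here the OLS slope t-statistics are compared for brevity):

```r
# Run OLS in both directions on simulated cointegrated prices and keep
# the specification whose slope has the larger absolute t-statistic.
set.seed(42)
stockA <- cumsum(rnorm(500)) + 50            # random-walk "Stock A"
stockB <- 1.3 * stockA + rnorm(500, sd = 2)  # "Stock B", cointegrated with A

fit.ab <- lm(stockB ~ stockA + 0)  # Stock A independent
fit.ba <- lm(stockA ~ stockB + 0)  # Stock B independent
t.ab <- summary(fit.ab)$coefficients[1, "t value"]
t.ba <- summary(fit.ba)$coefficients[1, "t value"]

# Keep the hedge ratio from the more significant fit
hedge <- if (abs(t.ab) >= abs(t.ba)) coef(fit.ab)[1] else coef(fit.ba)[1]
```

The symbol names and simulated parameters above are illustrative only.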

First, let's get a feel for the data and plot the closing prices of EWA and EWC. I want to use these two series and this time period to see if we can replicate Ernie Chan's study in his book, Algorithmic Trading: Winning Strategies and Their Rationale.

EWA and EWC visually appear to have a high correlation; the Pearson correlation coefficient is 0.9564724. If two series are correlated it essentially means they move in the same direction most of the time. However, correlation tells you nothing about the nature of the spread between the two series, or whether they will converge (mean revert) or continue to diverge. With cointegration, we form a linear relationship between Stock A and Stock B in which the distance between A and B is somewhat fixed and estimated to mean revert. In essence, we measure the 'distance' between A and B in terms of standard deviations, and a trade is taken when n standard deviations is met. A stop loss may also be placed outside the normal sigma range, ensuring the trade is exited if the relationship breaks down.
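
A toy illustration of the correlation-versus-cointegration distinction (entirely synthetic data): two series whose daily moves share a common shock are highly correlated, yet their spread is itself a random walk, so they are correlated but not cointegrated.

```r
# Two random walks driven by a common daily shock plus independent noise.
set.seed(3)
common <- rnorm(1000)
a <- cumsum(common + rnorm(1000, sd = 0.3))
b <- cumsum(common + rnorm(1000, sd = 0.3))

ret.cor <- cor(diff(a), diff(b))  # high correlation of daily moves
# The spread a - b accumulates only the independent noise: it wanders
# without a stable mean, so despite the high correlation there is no
# mean-reversion trade in the spread.
spread <- a - b
```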

A scatter plot shows the variation around the OLS regression fit:

```r
# Scatter plot of price series
# plot x,y
plot(df$EWA_Close, df$EWC_Close,
     main = "Scatterplot EWA and EWC With OLS Regression Fit",
     xlab = "EWA Close", ylab = "EWC Close", pch = 19)
abline(lm(df$EWC_Close ~ df$EWA_Close), col = "red") # regression line (y~x)
```

We perform ordinary least squares regression using EWA as the independent variable (x) and EWC as the dependent variable (y). Note we set the intercept to 0 so that the regression fit provides only the hedge ratio between EWA and EWC.

```r
# Ordinary least squares regression
# EWA independent variable
ols <- lm(df$EWC_Close ~ df$EWA_Close + 0) # regression line (y~x), no intercept
summary(ols)
beta <- coef(ols)[1]
resid <- as.data.frame(df$EWC_Close - beta * df$EWA_Close)
resid <- data.frame(resid, "Date" = df$Date)
colnames(resid)[1] <- "residuals"
plot(resid$Date, resid$residuals, col = "navyblue",
     xlab = "Date", ylab = "residuals", type = "l",
     main = "EWA,EWC Residuals - EWA Independent Variable - No Intercept",
     cex.main = 1)
```

In the above, we used ordinary least squares regression to estimate the hedge ratio (beta), and we form the spread by subtracting beta * EWA close from EWC close. Note we use closing prices, not daily returns. By using closing prices, the hedge ratio provides the number of shares to go long/short for each ETF.

The augmented Dickey-Fuller test is used to determine whether a series is non-stationary or stationary (trending or non-trending). The purpose of the test is to reject the null hypothesis of a unit root. If we have a unit root, we have a non-stationary series. We want a stationary series so that we may trade it efficiently using mean reversion.
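
To build intuition, here is a hand-rolled sketch of the idea behind the test (base R, simulated data): regress the first difference on the lagged level; a strongly negative t-statistic on the lagged level argues against a unit root. urca's `ur.df` used below adds lagged-difference terms and the proper Dickey-Fuller critical values, so this is illustration only, not a substitute.

```r
# Dickey-Fuller-style regression: delta_y ~ y_lagged (plus intercept).
set.seed(7)
rw <- cumsum(rnorm(1000)) # random walk: has a unit root
wn <- rnorm(1000)         # white noise: stationary

df_tstat <- function(y) {
  dy   <- diff(y)
  ylag <- y[-length(y)]
  fit  <- lm(dy ~ ylag)
  summary(fit)$coefficients["ylag", "t value"]
}

df_tstat(rw) # near zero: cannot reject a unit root
df_tstat(wn) # strongly negative: reject the unit root
```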

```r
# urca ADF test
library(urca)
summary(ur.df(resid$residuals, type = "drift", lags = 1))
```

```
###############################################
# Augmented Dickey-Fuller Test Unit Root Test #
###############################################

Test regression drift

Call:
lm(formula = z.diff ~ z.lag.1 + 1 + z.diff.lag)

Residuals:
     Min       1Q   Median       3Q      Max
-1.17542 -0.17346 -0.00447  0.16173  1.27800

Coefficients:
              Estimate Std. Error t value Pr(>|t|)
(Intercept)  0.0006657  0.0072252   0.092 0.926599
z.lag.1     -0.0176743  0.0053087  -3.329 0.000892 ***
z.diff.lag  -0.1751998  0.0254683  -6.879 8.82e-12 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.2781 on 1495 degrees of freedom
Multiple R-squared:  0.04101,    Adjusted R-squared:  0.03973
F-statistic: 31.97 on 2 and 1495 DF,  p-value: 2.548e-14

Value of test-statistic is: -3.3293 5.5752

Critical values for test statistics:
      1pct  5pct 10pct
tau2 -3.43 -2.86 -2.57
phi1  6.43  4.59  3.78
```

The ADF test statistic is -3.3293, which is below the 5% critical value of -2.86. We may therefore reject the unit root hypothesis at the 95% confidence level, meaning the residual series is stationary. If we flip the independent variable from EWA to EWC and re-run OLS and the augmented Dickey-Fuller test, we obtain a test statistic of -3.3194, also below the 5% critical value of -2.86, so that specification is likewise stationary at the 95% level. We choose the more negative (more statistically significant) test statistic, which is the one with EWA as the independent variable.

Next, to determine the number of shares to purchase/short, we may use the Johansen test. The eigenvectors determine the weights for each ETF.

```r
# Johansen test
# Eigenvectors give the share weights
# Test is a cointegration test
coRes <- ca.jo(data.frame(df$EWA_Close, df$EWC_Close),
               type = "trace", K = 2, ecdet = "none", spec = "longrun")
summary(coRes)
slot(coRes, "V") # matrix of eigenvectors, normalised with respect to the first variable
```

```
######################
# Johansen-Procedure #
######################

Test type: trace statistic, with linear trend

Eigenvalues (lambda):
[1] 0.010146673 0.002667727

Values of teststatistic and critical values of test:

          test 10pct  5pct  1pct
r <= 1 |  4.00  6.50  8.18 11.65
r = 0  | 19.28 15.66 17.95 23.52

Eigenvectors, normalised to first column:
(These are the cointegration relations)

                df.EWA_Close.l2 df.EWC_Close.l2
df.EWA_Close.l2       1.0000000       1.0000000
df.EWC_Close.l2      -0.7772904       0.6080793

Weights W:
(This is the loading matrix)

               df.EWA_Close.l2 df.EWC_Close.l2
df.EWA_Close.d     0.004309716    -0.003029255
df.EWC_Close.d     0.028971878    -0.002649008
```

The eigenvectors form the weights for each ETF. Simply put, if we allocate $10,000 to each ETF, we buy 10,000 * 1 / EWA_Close shares of EWA (long), and we sell short 10,000 * 0.7772904 / EWC_Close shares of EWC. With these weightings we form a stationary portfolio.
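
A small sizing sketch using the first Johansen eigenvector from the output above. The closing prices here are hypothetical placeholders; in the post they would be the EWA/EWC closes on the trade date.

```r
# Shares per leg from the eigenvector weights: capital * weight / price.
evec    <- c(EWA = 1.0000000, EWC = -0.7772904) # first eigenvector
capital <- 10000                                # dollars per leg
px      <- c(EWA = 20, EWC = 25)                # hypothetical closes

shares <- capital * evec / px # positive = long, negative = short
shares
# EWA: long 500 shares; EWC: short roughly 311 shares
```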

To determine how long the cointegrated relationship takes to revert to the mean, we can perform a linear regression of the one-day differences of the spread against the one-day-lagged spread minus its mean. This is known as the half life of mean reversion.

```r
# Calculate half life of mean reversion (residuals)
# Calculate yt-1 and (yt-1 - yt)
residuals <- c(resid$residuals) # pull residuals to a vector
y.lag <- c(residuals[2:length(residuals)], 0)      # shift vector by one day
y.lag <- y.lag[1:(length(y.lag) - 1)]              # remove anomalous element at end of vector
residuals <- residuals[1:(length(residuals) - 1)]  # make vector same length as y.lag
y.diff <- residuals - y.lag                        # one-day differences
y.diff <- y.diff[1:(length(y.diff) - 1)]           # make vector same length as y.lag
prev.y.mean <- y.lag - mean(y.lag)                 # de-mean the lagged series
prev.y.mean <- prev.y.mean[1:(length(prev.y.mean) - 1)]
final.df <- data.frame(y.diff, prev.y.mean)        # create final data frame

# Linear regression with intercept
result <- lm(y.diff ~ prev.y.mean, data = final.df)
half_life <- -log(2) / coef(result)[2]
half_life

# Linear regression with no intercept
result <- lm(y.diff ~ prev.y.mean + 0, data = final.df)
half_life1 <- -log(2) / coef(result)[1]
half_life1

# Print general linear regression statistics
summary(result)
```

The half life of mean reversion is 31.62271 days.
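
As a sanity check of this estimator, we can apply the same regression (written here with the conventional lag direction, on synthetic data) to a simulated AR(1) series whose true half-life is known: for y_t = phi * y_{t-1} + e_t, the half-life is -log(2) / log(phi).

```r
# Half-life estimator on a simulated AR(1) with phi = 0.95,
# whose true half-life is -log(2) / log(0.95), about 13.5 days.
set.seed(1)
phi <- 0.95
y <- as.numeric(arima.sim(list(ar = phi), n = 5000))

y.lag  <- y[-length(y)] # y_{t-1}
y.diff <- diff(y)       # y_t - y_{t-1}
fit <- lm(y.diff ~ I(y.lag - mean(y.lag)) + 0)
est <- -log(2) / coef(fit)[1]
est # close to the true value for a sample this long
```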

We will now form a trading strategy and calculate a rolling z-score over the residuals, with a look back set equal to the obtained half life.

```r
# Make z-score of residuals
# Set look back equal to half life
colnames(resid)
resid$mean <- SMA(resid[, "residuals"], round(half_life))
resid$stdev <- runSD(resid[, "residuals"], n = round(half_life),
                     sample = TRUE, cumulative = FALSE)
resid$zscore <- apply(resid[, c("residuals", "mean", "stdev")], 1,
                      function(x) { (x[1] - x[2]) / x[3] })
resid[is.na(resid)] <- 0 # convert all NA to 0
df <- data.frame(df, resid) # join residuals data frame to original (df)

# Plot z-score of residuals
plot(resid$zscore, type = "l",
     main = "Rolling Z-Score - Look Back Set Equal To Half Life")
```

Note the spread's rolling z-score ranges between roughly +3 and -3 standard deviations.
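
For readers without TTR installed, the same rolling z-score can be sketched in base R (the function name and look-back here are illustrative):

```r
# Base-R rolling z-score: rolling mean via stats::filter, rolling sd via
# sapply over the trailing window. Returns NA until a full window exists.
roll_zscore <- function(x, n) {
  m <- stats::filter(x, rep(1 / n, n), sides = 1)  # trailing rolling mean
  s <- sapply(seq_along(x), function(i)
    if (i < n) NA else sd(x[(i - n + 1):i]))       # trailing rolling sd
  as.numeric((x - m) / s)
}

x <- rnorm(100)
z <- roll_zscore(x, 10)
```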

We can now perform a back test to see how well it would have done. For the sake of simplicity we will not optimise on an in-sample period and test out of sample. We will also not apply the share weightings or account for commissions.

```r
###################
# Back Test Script
###################
# Create trading logic
# When z-score crosses below 0, buy EWA and sell short EWC per eigenvectors
df$enter.long    <- ifelse(df$zscore <= 0,  1, 0) # for long EWA below mean
df$exit.long     <- ifelse(df$zscore == 0,  1, 0) # for long EWA below mean
df$enter.short   <- ifelse(df$zscore <= 0, -1, 0) # for short EWC below mean
df$exit.short    <- ifelse(df$zscore == 0, -1, 0) # for short EWC below mean
df$enter.short.1 <- ifelse(df$zscore >= 0, -1, 0) # for short EWA above mean
df$exit.short.1  <- ifelse(df$zscore == 0, -1, 0) # for short EWA above mean
df$enter.long.1  <- ifelse(df$zscore >= 0,  1, 0) # for long EWC above mean
df$exit.long.1   <- ifelse(df$zscore == 0,  1, 0) # for long EWC above mean

# Calculate close-to-close returns
df$ewa.clret <- ROC(df$EWA_Close, type = c("discrete"))
df$ewa.clret[1] <- 0
df$ewc.clret <- ROC(df$EWC_Close, type = c("discrete"))
df$ewc.clret[1] <- 0

# Long EWA below mean
df <- df %>% dplyr::mutate(ewa.long = ifelse(enter.long == 1, 1,
                           ifelse(exit.long == 1, 0, 0)))
# Short EWC above mean
df <- df %>% dplyr::mutate(ewc.short = ifelse(enter.short == -1, 1,
                           ifelse(exit.short == -1, 0, 0)))
# Short EWA above mean
df <- df %>% dplyr::mutate(ewa.short = ifelse(enter.short.1 == -1, 1,
                           ifelse(exit.short.1 == 1, 0, 0)))
# Long EWC above mean
df <- df %>% dplyr::mutate(ewc.long = ifelse(enter.long.1 == 1, 1,
                           ifelse(exit.long.1 == -1, 0, 0)))

# Lag signals by one day so entry occurs on the next bar
df$ewa.long  <- lag(df$ewa.long, 1)
df$ewc.short <- lag(df$ewc.short, 1)
df$ewa.short <- lag(df$ewa.short, 1)
df$ewc.long  <- lag(df$ewc.long, 1)

# Calculate equity curves
df$equity.long    <- apply(df[, c("ewa.long",  "ewa.clret")], 1, function(x) { x[1] * x[2] })
df$equity.short   <- apply(df[, c("ewc.short", "ewc.clret")], 1, function(x) { x[1] * x[2] })
df$equity.long.1  <- apply(df[, c("ewa.short", "ewa.clret")], 1, function(x) { x[1] * x[2] })
df$equity.short.1 <- apply(df[, c("ewc.long",  "ewc.clret")], 1, function(x) { x[1] * x[2] })

# Combine signals
df$combined <- apply(df[, c("equity.long", "equity.short",
                            "equity.long.1", "equity.short.1")], 1,
                     function(x) { x[1] + x[2] + x[3] + x[4] })

# Pull select columns from data frame to make xts whilst retaining format
xts1 <- xts(df$equity.long,    order.by = as.POSIXct(df$Date, format = "%Y-%m-%d"))
xts2 <- xts(df$equity.short,   order.by = as.POSIXct(df$Date, format = "%Y-%m-%d"))
xts3 <- xts(df$equity.long.1,  order.by = as.POSIXct(df$Date, format = "%Y-%m-%d"))
xts4 <- xts(df$equity.short.1, order.by = as.POSIXct(df$Date, format = "%Y-%m-%d"))
xts5 <- xts(df$combined,       order.by = as.POSIXct(df$Date, format = "%Y-%m-%d"))

# Join xts together
compare <- cbind(xts1, xts2, xts3, xts4, xts5)

# Use the PerformanceAnalytics package for trade statistics
colnames(compare) <- c("EWA Long", "EWC Short", "EWA Short", "EWC Long", "Combined")
charts.PerformanceSummary(compare, main = "EWC,EWA Pair",
                          wealth.index = TRUE, colorset = rainbow12equal)
performance.table <- rbind(table.AnnualizedReturns(compare), maxDrawdown(compare),
                           CalmarRatio(compare), table.DownsideRisk(compare))
drawdown.table <- rbind(table.Drawdowns(compare))
#dev.off()
#logRets <- log(cumprod(1 + compare))
#chart.TimeSeries(logRets, legend.loc = 'topleft', colorset = rainbow12equal)
print(performance.table)
print(drawdown.table)
```

One thing to note: for the whole sample period from April 26th 2006 to April 9th 2012, we use the exact same hedge ratio to form the stationary series. This is likely not the correct procedure, and we also use the full sample to estimate the half life of mean reversion. As we expect the relationship between series A and B to change over time, I believe better practice would be to use a fixed rolling window to estimate the hedge ratio and mean reversion half life, so that each new signal uses the hedge ratio and half life derived from the rolling look-back period.
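
A sketch of that rolling-window idea (illustrative names, window length, and simulated data; not the revised back test itself):

```r
# Re-estimate the no-intercept hedge ratio over a fixed trailing window
# so each bar's beta uses only recent history. NA until a full window.
rolling_beta <- function(y, x, n = 250) {
  sapply(seq_along(y), function(i) {
    if (i < n) return(NA_real_)
    idx <- (i - n + 1):i
    coef(lm(y[idx] ~ x[idx] + 0))[[1]] # hedge ratio over the window
  })
}

set.seed(9)
xa <- cumsum(rnorm(600)) + 100
ya <- 0.8 * xa + rnorm(600)
betas <- rolling_beta(ya, xa, n = 250) # drifts around the true 0.8
```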

It should also be noted that we do not quite match Ernie Chan's test from his book. I am using Alpha Vantage as the source of price data, and there is a difference in price between Ernie's EWA series and the Alpha Vantage EWA series. There are also differences between the urca Johansen test and the one Ernie uses from the jplv7 package. I will attempt to address these issues in upcoming posts.

This serves as a good introduction for me and others to the topic of pairs trading.

Full R code below.

```r
# Test for cointegration
# Download daily price data from Alpha Vantage API
# Plot two series
# Perform Linear Regression
# Plot Residuals
# ADF test for unit root
# Half life of mean reversion
# zscore look back equal to half life
# back test long stock A, short Stock B
# Andrew Bannerman 11.1.2017

require(alphavantager)
require(urca)
require(lattice)
require(lubridate)
require(dplyr)
require(magrittr)
require(TTR)
require(zoo)
require(data.table)
require(xts)
require(PerformanceAnalytics)

# Download price series from Alpha Vantage
# install.packages('alphavantager')
# Example query (substitute your own key):
# https://www.alphavantage.co/query?function=TIME_SERIES_DAILY_ADJUSTED&symbol=EWA&apikey=your_api_key&outputsize=full&datatype=csv
av_api_key("your_api_key")
print(av_api_key())
#EWC <- av_get(symbol = "GLD", av_fun = "TIME_SERIES_DAILY_ADJUSTED", outputsize = "full")
#EWA <- av_get(symbol = "SLV", av_fun = "TIME_SERIES_DAILY_ADJUSTED", outputsize = "full")
# bit coin
#EWC <- av_get(symbol = "BTC", av_fun = "DIGITAL_CURRENCY_DAILY", outputsize = "full", market = "CNY")

EWA <- read.csv("D:/R Projects/Final Scripts/Cointegration Scripts/data/daily_adjusted_EWA.csv",
                header = TRUE, stringsAsFactors = FALSE)
EWC <- read.csv("D:/R Projects/Final Scripts/Cointegration Scripts/data/daily_adjusted_EWC.csv",
                header = TRUE, stringsAsFactors = FALSE)
str(EWA)

# Convert timestamp to Date format
EWA$timestamp <- ymd(EWA$timestamp)
EWC$timestamp <- ymd(EWC$timestamp)

# Sort by date
EWA <- arrange(EWA, timestamp)
EWC <- arrange(EWC, timestamp)

# Merge data frames by date
df <- merge(EWA, EWC, by = "timestamp")
head(df)

# Rename columns
colnames(df)[1] <- "Date"
colnames(df)[6] <- "EWA_Close"
colnames(df)[14] <- "EWC_Close"

# Subset by date
df <- subset(df, Date >= as.POSIXct("2006-04-26") & Date <= as.POSIXct("2012-04-09"))

# Scatter plot of price series
plot(df$EWA_Close, df$EWC_Close,
     main = "Scatterplot EWA and EWC With OLS Regression Fit",
     xlab = "EWA Close", ylab = "EWC Close", pch = 19)
abline(lm(df$EWC_Close ~ df$EWA_Close), col = "red") # regression line (y~x)

# Line plot of series
require(ggplot2)
require(scales)
ggplot(df, aes(Date)) +
  theme_classic() +
  geom_line(aes(y = EWA_Close), colour = "red") +
  geom_line(aes(y = EWC_Close), colour = "blue") +
  scale_x_date(breaks = date_breaks("years"), labels = date_format("%Y")) +
  ggtitle("EWA, EWC Close", subtitle = "2006-04-26 to 2012-04-09") +
  labs(x = "Year", y = "EWA,EWC Close") +
  theme(plot.title = element_text(hjust = 0.5),
        plot.subtitle = element_text(hjust = 0.5)) +
  annotate("text", label = "EWA", x = as.Date("2006-04-26"), y = 22, color = "blue") +
  annotate("text", label = "EWC", x = as.Date("2006-04-26"), y = 15, color = "red")

# Find correlation coefficient
# method = pearson, spearman, kendall
EWA.EWC.cor <- cor(df$EWA_Close, df$EWC_Close, use = "complete.obs", method = "pearson")

# Ordinary least squares regression
# EWA independent variable
ols <- lm(df$EWC_Close ~ df$EWA_Close + 0) # regression line (y~x), no intercept
summary(ols)
beta <- coef(ols)[1]
resid <- as.data.frame(df$EWC_Close - beta * df$EWA_Close)
resid <- data.frame(resid, "Date" = df$Date)
colnames(resid)[1] <- "residuals"
plot(resid$Date, resid$residuals, col = "navyblue",
     xlab = "Date", ylab = "residuals", type = "l",
     main = "EWA,EWC Residuals - EWA Independent Variable - No Intercept",
     cex.main = 1)

# urca ADF test
library(urca)
summary(ur.df(resid$residuals, type = "drift", lags = 1))

# Johansen test
# Eigenvectors give the share weights
coRes <- ca.jo(data.frame(df$EWA_Close, df$EWC_Close),
               type = "trace", K = 2, ecdet = "none", spec = "longrun")
summary(coRes)
slot(coRes, "V")    # eigenvectors, normalised with respect to the first variable
slot(coRes, "Vorg") # eigenvectors such that \hat V'S_{kk}\hat V = I

# Calculate half life of mean reversion (residuals)
residuals <- c(resid$residuals) # pull residuals to a vector
y.lag <- c(residuals[2:length(residuals)], 0)      # shift vector by one day
y.lag <- y.lag[1:(length(y.lag) - 1)]              # remove anomalous element at end of vector
residuals <- residuals[1:(length(residuals) - 1)]  # make vector same length as y.lag
y.diff <- residuals - y.lag                        # one-day differences
y.diff <- y.diff[1:(length(y.diff) - 1)]           # make vector same length as y.lag
prev.y.mean <- y.lag - mean(y.lag)                 # de-mean the lagged series
prev.y.mean <- prev.y.mean[1:(length(prev.y.mean) - 1)]
final.df <- data.frame(y.diff, prev.y.mean)        # create final data frame

# Linear regression with intercept
result <- lm(y.diff ~ prev.y.mean, data = final.df)
half_life <- -log(2) / coef(result)[2]
half_life

# Linear regression with no intercept
result <- lm(y.diff ~ prev.y.mean + 0, data = final.df)
half_life1 <- -log(2) / coef(result)[1]
half_life1

# Print general linear regression statistics
summary(result)

# Make z-score of residuals
# Set look back equal to half life
colnames(resid)
half_life <- 31
resid$mean <- SMA(resid[, "residuals"], round(half_life))
resid$stdev <- runSD(resid[, "residuals"], n = round(half_life),
                     sample = TRUE, cumulative = FALSE)
resid$zscore <- apply(resid[, c("residuals", "mean", "stdev")], 1,
                      function(x) { (x[1] - x[2]) / x[3] })
resid[is.na(resid)] <- 0 # convert all NA to 0
df <- data.frame(df, resid) # join residuals data frame to original (df)

# Plot z-score of residuals
plot(resid$zscore, type = "l",
     main = "Rolling Z-Score - Look Back Set Equal To Half Life", col = "darkblue")

###################
# Back Test Script
###################
# Create trading logic (entries at +/-0.7 z-score in this version)
df$enter.long    <- ifelse(df$zscore <= -0.7,  1, 0) # for long EWA below mean
df$exit.long     <- ifelse(df$zscore == 0,     1, 0) # for long EWA below mean
df$enter.short   <- ifelse(df$zscore <= -0.7, -1, 0) # for short EWC below mean
df$exit.short    <- ifelse(df$zscore == 0,    -1, 0) # for short EWC below mean
df$enter.short.1 <- ifelse(df$zscore >= 0.7,  -1, 0) # for short EWA above mean
df$exit.short.1  <- ifelse(df$zscore == 0,    -1, 0) # for short EWA above mean
df$enter.long.1  <- ifelse(df$zscore >= 0.7,   1, 0) # for long EWC above mean
df$exit.long.1   <- ifelse(df$zscore == 0,     1, 0) # for long EWC above mean

# Calculate close-to-close returns
df$ewa.clret <- ROC(df$EWA_Close, type = c("discrete"))
df$ewa.clret[1] <- 0
df$ewc.clret <- ROC(df$EWC_Close, type = c("discrete"))
df$ewc.clret[1] <- 0

# Long EWA below mean
df <- df %>% dplyr::mutate(ewa.long = ifelse(enter.long == 1, 1,
                           ifelse(exit.long == 1, 0, 0)))
# Short EWC above mean
df <- df %>% dplyr::mutate(ewc.short = ifelse(enter.short == -1, 1,
                           ifelse(exit.short == -1, 0, 0)))
# Short EWA above mean
df <- df %>% dplyr::mutate(ewa.short = ifelse(enter.short.1 == -1, 1,
                           ifelse(exit.short.1 == 1, 0, 0)))
# Long EWC above mean
df <- df %>% dplyr::mutate(ewc.long = ifelse(enter.long.1 == 1, 1,
                           ifelse(exit.long.1 == -1, 0, 0)))

# Lag signals by one day so entry occurs on the next bar
df$ewa.long  <- lag(df$ewa.long, 1)
df$ewc.short <- lag(df$ewc.short, 1)
df$ewa.short <- lag(df$ewa.short, 1)
df$ewc.long  <- lag(df$ewc.long, 1)

# Calculate equity curves
df$equity.long    <- apply(df[, c("ewa.long",  "ewa.clret")], 1, function(x) { x[1] * x[2] })
df$equity.short   <- apply(df[, c("ewc.short", "ewc.clret")], 1, function(x) { x[1] * x[2] })
df$equity.long.1  <- apply(df[, c("ewa.short", "ewa.clret")], 1, function(x) { x[1] * x[2] })
df$equity.short.1 <- apply(df[, c("ewc.long",  "ewc.clret")], 1, function(x) { x[1] * x[2] })
tail(df, 500)

# Combine signals
df$combined <- apply(df[, c("equity.long", "equity.short",
                            "equity.long.1", "equity.short.1")], 1,
                     function(x) { x[1] + x[2] + x[3] + x[4] })

# Pull select columns from data frame to make xts whilst retaining format
xts1 <- xts(df$equity.long,    order.by = as.POSIXct(df$Date, format = "%Y-%m-%d"))
xts2 <- xts(df$equity.short,   order.by = as.POSIXct(df$Date, format = "%Y-%m-%d"))
xts3 <- xts(df$equity.long.1,  order.by = as.POSIXct(df$Date, format = "%Y-%m-%d"))
xts4 <- xts(df$equity.short.1, order.by = as.POSIXct(df$Date, format = "%Y-%m-%d"))
xts5 <- xts(df$combined,       order.by = as.POSIXct(df$Date, format = "%Y-%m-%d"))

# Join xts together
compare <- cbind(xts1, xts2, xts3, xts4, xts5)

# Use the PerformanceAnalytics package for trade statistics
colnames(compare) <- c("EWA Long", "EWC Short", "EWA Short", "EWC Long", "Combined")
charts.PerformanceSummary(compare, main = "EWC,EWA Pair",
                          wealth.index = TRUE, colorset = rainbow12equal)
performance.table <- rbind(table.AnnualizedReturns(compare), maxDrawdown(compare),
                           CalmarRatio(compare), table.DownsideRisk(compare))
drawdown.table <- rbind(table.Drawdowns(compare))
#dev.off()
#logRets <- log(cumprod(1 + compare))
#chart.TimeSeries(logRets, legend.loc = 'topleft', colorset = rainbow12equal)
print(performance.table)
print(drawdown.table)
```