Momentum Strategy – Boosting Returns – VFINX / VFITX

In this post we will look at a strategy that is 100% invested in the S&P500 and, during adverse conditions, 100% invested in an intermediate-term Treasury bond fund. The two funds used for the strategy:

Vanguard 500 Index Fund Investor Class (VFINX)
Vanguard Intermediate-Term Treasury Fund Investor Shares (VFITX)

The strategy is as follows, using MONTHLY bars:
1. Long VFINX (S&P500) when the price is over the 189-day moving average or its monthly equivalent (9sma)
2. When VFINX (S&P500) closes below the monthly 9sma, sell 100% of VFINX at the next month's open and buy VFITX (bonds)
3. When VFINX (S&P500) closes back above the monthly 9sma, sell 100% of VFITX and go 100% long VFINX again

Essentially the strategy is long the S&P500 when it's above the 9sma (daily, 189sma) and 100% in intermediate bonds when it's below.
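The signal logic can be sketched in a few lines of R. This is a minimal illustration on simulated data: the `monthly` data frame and its columns are hypothetical stand-ins for the VFINX monthly series (the full back test code is at the end of the post), and `stats::filter` is used as a base-R equivalent of TTR's SMA:

```r
# Hypothetical monthly adjusted closes (stand-in for VFINX data)
set.seed(1)
monthly <- data.frame(Adj.Close = 100 * cumprod(1 + rnorm(120, 0.007, 0.04)))

# 9-month simple moving average, the monthly equivalent of the 189-day SMA
# (stats::filter with equal weights is equivalent to TTR::SMA(x, n = 9))
monthly$sma9 <- as.numeric(stats::filter(monthly$Adj.Close, rep(1/9, 9), sides = 1))

# 1 = long VFINX (stocks), 0 = long VFITX (bonds)
monthly$signal <- ifelse(monthly$Adj.Close > monthly$sma9, 1, 0)

# Trade at the NEXT month's open: shift the signal forward one bar
monthly$position <- c(NA, head(monthly$signal, -1))
```

The one-bar shift at the end is what avoids look-ahead bias: the signal observed at this month's close is only acted on at next month's open.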

The color red denotes times when VFINX(SP500) is below the monthly 9sma (daily, 189sma) and when the strategy would be 100% long bonds.

[Figure: VFINX monthly closes, red where below the 9sma]

The returns for this strategy since 1991 inception:

[Figures: strategy equity curve and performance table, VFINX/VFITX monthly]

From the backtest we see an annualized return of 12.4% with a Sharpe ratio of 1.21. Maximum drawdown is 15%, low enough that we may explore using leverage to boost gains whilst maintaining a risk profile similar to buy and hold (VFINX).

We may boost returns by longing a 3x leveraged S&P500 ETF. This can be achieved with the ProShares UltraPro ETF (UPRO, inception 2009). First I want to check whether we can model UPRO data prior to 2009.

[Figure: regression fit of UPRO vs 3x S&P500 daily returns]

Plotting the regression fit of UPRO against 3x S&P500 daily returns shows the fit is not perfect: the correlation is high, but the % return tracking is not exact.

[Figure: UPRO vs 3x S&P500 daily return tracking]

Tracking is almost perfect prior to June 2012; after that there is a notable divergence between the 3x S&P500 daily returns and UPRO.

When we model UPRO prior to 2009 by simply multiplying the S&P500 daily returns by 3, we know the result will be largely theoretical (it ignores the imperfect tracking above, as well as fund expenses).
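A minimal sketch of that approximation, using simulated returns as a stand-in for the real S&P500 series (a real 3x fund also carries expenses and daily rebalancing costs, so this is optimistic):

```r
# Hypothetical daily S&P500 returns standing in for the VFINX/SPX series
set.seed(42)
spx.ret <- rnorm(2500, mean = 0.0004, sd = 0.011)

# Theoretical 3x fund: triple each DAILY return, then compound
upro.sim.ret <- 3 * spx.ret
upro.sim.curve <- cumprod(1 + upro.sim.ret)
spx.curve <- cumprod(1 + spx.ret)

# Note: 3x the daily return is NOT 3x the cumulative return;
# daily compounding (volatility drag) makes the two paths diverge
```

This is the standard way leveraged ETFs are approximated on pre-inception data: the leverage applies bar by bar, never to the cumulative return.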

Let's back test, swapping VFINX for our theoretical UPRO prior to 2009.

[Figures: equity curve and performance table, theoretical UPRO strategy]

Maximum drawdown is 45%, which is significantly lower than buy-and-hold UPRO (92%). Essentially we enjoy staggering returns with almost half of the drawdown. The 189-day SMA (monthly 9sma) acts as a 'low pass filter' and avoids the catastrophic post-1999 and post-2007 bear markets.

[Figure: cumulative returns]

We see that the cumulative return (Return.cumulative output) is 1276.56, i.e. roughly a 1,277x multiple.

These results are theoretical, but staggering.

# Get first and last date
first.date <- head(df$Date, n = 1)
last.date <- tail(df$Date, n = 1)
# Find the time difference in days, then convert to years
days.between <- as.numeric(last.date - first.date, units = "days")
years.between <- days.between / 365.25
years.between

10 thousand dollars grows to 12.7 million dollars over a period of roughly 26 years. Not bad for a maximum drawdown of 45% with 3x leverage.

In closing, the 189-day SMA (or its monthly 9sma equivalent) acts as a low pass filter. The aim of this filter is to reduce overall volatility, produce a smaller drawdown and avoid periods of negative compounding. We see from the results that this has been achieved.

Reader Question:
What happens if we optimize the monthly bar SMA look back higher / lower than the tested 9sma?

Full back test R code:


# Long Term S&P500 - Switch To Bonds
# Andrew Bannerman 10.8.2017

require(lubridate)
require(dplyr)
require(magrittr)
require(TTR)
require(zoo)
require(data.table)
require(xts)
require(PerformanceAnalytics)

# Data path
data.dir <- "C:/R Projects/Final Scripts/Vanguard Long Term Strategies"
data.read.VFINX <- paste(data.dir,"VFINX.csv",sep="/")
data.read.VFITX <- paste(data.dir,"VFITX.csv",sep="/")

# Read data
read.VFINX <- read.csv(data.read.VFINX,header=TRUE, sep=",",skip=0,stringsAsFactors=FALSE)
read.VFITX <- read.csv(data.read.VFITX,header=TRUE, sep=",",skip=0,stringsAsFactors=FALSE)
#read.VFITX <- read.VFITX[-nrow(read.VFITX),] 

# Convert Values To Numeric
cols <-c(2:7)
read.VFINX[,cols] %<>% lapply(function(x) as.numeric(as.character(x)))
read.VFITX[,cols] %<>% lapply(function(x) as.numeric(as.character(x)))

#Convert Date Column [1]
read.VFINX$Date <- ymd(read.VFINX$Date)
read.VFITX$Date <- ymd(read.VFITX$Date)

# Merge two data sets
df <- full_join(read.VFINX, read.VFITX, by = c("Date" = "Date"))

# Rename Columns
colnames(df)[1] <-"Date"
colnames(df)[2] <-"VFINX.Open"
colnames(df)[3] <-"VFINX.High"
colnames(df)[4] <-"VFINX.Low"
colnames(df)[5] <-"VFINX.Close"
colnames(df)[6] <-"VFINX.Adj.Close"
colnames(df)[7] <-"VFINX.Volume"
colnames(df)[8] <-"VFITX.Open"
colnames(df)[9] <-"VFITX.High"
colnames(df)[10] <-"VFITX.Low"
colnames(df)[11] <-"VFITX.Close"
colnames(df)[12] <-"VFITX.Adj.Close"
colnames(df)[13] <-"VFITX.Volume"

# Convert all NA to 0
df[is.na(df)] <- 0

# Use TTR package to create rolling SMA n day moving average
# Create function and loop in order to repeat the desired number of SMAs for example 2:30
getSMA <- function(numdays) {
  function(df) {
    SMA(df[,"VFINX.Adj.Close"], numdays)    # Calls TTR package to create SMA
  }
}
# Create a matrix to put the SMAs in
sma.matrix <- matrix(nrow=nrow(df), ncol=0)

# Loop for filling it
for (i in 2:400) {
  sma.matrix <- cbind(sma.matrix, getSMA(i)(df))
}

# Rename columns
colnames(sma.matrix) <- sapply(2:400, function(n)paste("adj.close.sma.n", n, sep=""))

# Bind to existing dataframe
df <-  cbind(df, sma.matrix)

# Convert all NA to 0
df[is.na(df)] <- 0

# Calculate Returns from open to close
#df$VFINX.ocret <- apply(df[,c('VFINX.Open', 'VFINX.Adj.Close')], 1, function(x) { (x[2]-x[1])/x[1]} )
#df$VFITX.ocret <- apply(df[,c('VFITX.Open', 'VFITX.Adj.Close')], 1, function(x) { (x[2]-x[1])/x[1]} )

# Calculate Close-to-Close returns
df$VFINX.clret <- ROC(df$VFINX.Adj.Close, type = c("discrete"))
df$VFITX.clret <- ROC(df$VFITX.Adj.Close, type = c("discrete"))
df$VFINX.clret[1] <- 0
df$VFITX.clret[1] <- 0

# Add leverage multiplier
#df$VFINX.clret <- df$VFINX.clret * 3

# Subset Date
df <- subset(df, Date >= as.Date("1991-10-01"))

# Name indicators #
VFINX.sma <- df$adj.close.sma.n9

# Enter buy / sell rules
df$signal.long.stocks <- ifelse(df$VFINX.Adj.Close > VFINX.sma, 1,0)
df$signal.long.bonds <- ifelse(df$VFINX.Adj.Close < VFINX.sma, 1,0)

# Lag the signal by one bar so the entry occurs on the NEXT monthly bar (avoids look-ahead bias)
df$signal.long.stocks <- dplyr::lag(df$signal.long.stocks, 1)
df$signal.long.bonds <- dplyr::lag(df$signal.long.bonds, 1)

df[is.na(df)] <- 0  # Set NA to 0

# Plot VFINX monthly with 9sma
plot(df$VFINX.Adj.Close, col = ifelse(df$VFINX.Adj.Close < VFINX.sma,'red','black'), pch = 10, cex=.5,ylab="VFINX Close",main="VFINX Below Monthly 9sma")

# Calculate equity curves
# Long stocks when the stock signal is on, otherwise flat
df$long.stocks.equity.curve <- ifelse(df$signal.long.stocks == 0, 0, df$VFINX.clret)

# Long bonds when the bond signal is on, otherwise flat
df$long.bonds.equity.curve <- ifelse(df$signal.long.bonds == 0, 0, df$VFITX.clret)

# Combine Signals
df$combined.equity.curve <- df$long.stocks.equity.curve + df$long.bonds.equity.curve

# Pull select columns from data frame to make XTS whilst retaining formats
xts1 = xts(df$long.stocks.equity.curve, order.by=as.Date(df$Date, format="%m/%d/%Y"))
xts2 = xts(df$long.bonds.equity.curve, order.by=as.Date(df$Date, format="%m/%d/%Y"))
xts3 = xts(df$combined.equity.curve, order.by=as.Date(df$Date, format="%m/%d/%Y"))
xts4 = xts(df$VFINX.clret, order.by=as.Date(df$Date, format="%m/%d/%Y")) 

# Join XTS together
compare <- cbind(xts3,xts2,xts1,xts4)

# Use the PerformanceAnalytics package for trade statistics

require(PerformanceAnalytics)
colnames(compare) <- c("Long.Stocks.Switch.Bonds","Long.Bonds","Long.Stocks","Buy And Hold")
charts.PerformanceSummary(compare,main="Long Stocks (VFINX) Over Monthly 9sma, Long Bonds (VFITX) Under Monthly 9sma", wealth.index=TRUE, colorset=rainbow12equal)
performance.table <- rbind(table.AnnualizedReturns(compare),maxDrawdown(compare), CalmarRatio(compare),table.DownsideRisk(compare))
drawdown.table <- rbind(table.Drawdowns(compare))
#dev.off()
#logRets <- log(cumprod(1+compare))
#chart.TimeSeries(logRets, legend.loc='topleft', colorset=rainbow12equal)

print(performance.table)
print(drawdown.table)
Return.cumulative(xts3, geometric = TRUE)

Half life of Mean Reversion – Ornstein-Uhlenbeck Formula for Mean-Reverting Process

Ernie Chan proposes a method to calculate the speed of mean reversion. He adjusts the discrete-time ADF (Augmented Dickey-Fuller test) formula to differential form, which takes the shape of the Ornstein-Uhlenbeck formula for a mean reverting process. Ornstein Uhlenbeck Process – Wikipedia

dy(t) = (λy(t − 1) + μ)dt + dε

Where dε is some Gaussian noise. Chan goes on to mention that using the discrete ADF formula below:

Δy(t) = λy(t − 1) + μ + βt + α1Δy(t − 1) + … + αkΔy(t − k) + εt

and performing a linear regression of Δy(t) against y(t − 1) provides λ which is then used in the first equation. However, the advantage of writing the formula in differential form is it allows an analytical solution for the expected value of y(t).

E( y(t)) = y0exp(λt) − μ/λ(1 − exp(λt))

Mean reverting series exhibit negative λ. Conversely positive λ means the series doesn’t revert back to the mean.

When λ is negative, the price decays exponentially towards the value −μ/λ with a half-life of decay equal to −log(2)/λ. See references.
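As a quick numeric check, plugging the λ obtained from the SPY regression later in this section into the half-life formula:

```r
lambda <- -0.06165              # regression slope from the SPY test in this post
half.life <- -log(2) / lambda   # half life of mean reversion, in days
half.life                       # ~11.24 days
```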

We can perform the regression of Δy(t) against y(t − 1) with the below R code on the SPY price series. For this test we will use a look back period of 100 days rather than the entire price series (1993 inception to present). If we used all of the data, we would be including how long it takes to recover from bear markets. For trading purposes, we wish to use a shorter sample of data in order to produce a more meaningful statistical test.

The procedure:
1. Lag the SPY close by one day
2. Compute the daily difference: today's close − yesterday's close
3. De-mean the lagged close: yesterday's close − mean(yesterday's close)
4. Perform a linear regression of (today's close − yesterday's close) on the de-meaned lagged close; the slope is λ
5. On the regression output, compute −log(2)/λ

# Calculate y(t-1) and (y(t) - y(t-1)); random.data holds the SPY close prices
y.lag <- random.data[1:(length(random.data) - 1)]   # yesterday's close, y(t-1)
y.diff <- diff(random.data)                         # today's close - yesterday's close
prev.y.mean <- y.lag - mean(y.lag)                  # de-mean the lagged close
final.df <- data.frame(y.diff = y.diff, prev.y.mean = prev.y.mean) # Create final data frame

# Linear Regression With Intercept
result <- lm(y.diff ~ prev.y.mean, data = final.df)
half_life <- -log(2)/coef(result)[2]
half_life

# Linear Regression With No Intercept
result = lm(y.diff ~ prev.y.mean + 0, data = final.df)
half_life1 = -log(2)/coef(result)[1]
half_life1

# Print general linear regression statistics
summary(result)

[Figures: regression summary output]

Observing the output of the above regression we see that the slope is negative, indicating a mean reverting process. From summary(result), λ is -0.06165, and performing -log(2)/λ gives a mean reversion half life of 11.24 days.

A half life of 11.24 days means we anticipate the series to revert to the mean within roughly 2x the half life, or about 22.5 days. However, to trade mean reversion profitably we need not exit exactly at the mean each time. Essentially, if a trade extends beyond 22 days we may suspect a short term or permanent regime shift. One may insulate against such losses by setting a 'time stop'.

The obtained 11.24 day half life is short enough for an interday trading horizon. If we had obtained a longer half life we might be waiting a long time for the series to revert back to the mean. Once we determine that the series is mean reverting we can trade it profitably with a simple linear model using a look back period equal to the half life. In a previous post we explored a simple linear zscore model: https://flare9xblog.wordpress.com/2017/09/24/simple-linear-strategy-for-sp500/

The lookback period of 11 days was also obtained independently using a 'brute force' approach (maybe luck): an optimized look back of 11 days produced the best result for the SPY. During that optimization it was noted that adjusting the lookback from 11 days to any number above or below decreased performance.

We illustrate the effect of moving the look back period shorter and longer than the obtained half life. For simplicity, we will use the total cumulative returns for comparison:

[Figures: cumulative returns for look back periods of 10, 11 and 12 days]

We see that a look back of 11 days produced the highest cumulative compounded returns.

Ernie Chan goes on to address the question of why we bother with statistical testing at all. The answer lies in the fact that specific trading rules only trigger when their conditions are met and therefore tend to skip over data. Statistical testing includes the data that a model may skip over and thus produces results with higher statistical significance.

Furthermore, once we confirm a series is mean reverting we can be reasonably assured that some profitable trading strategy exists for it, not necessarily the particular strategy we just back tested.

References
Algorithmic Trading: Winning Strategies and Their Rationale – May 28, 2013, by Ernie Chan

Modelling The Hurst Exponent

One of the purposes of the Hurst exponent is to determine whether a price series is trending (momentum), a random walk or mean reverting. With this information we may 'fit' a model that captures the nature of the series.

The Hurst exponent is categorized as:
H < 0.5 = mean reverting
H == 0.5 = random walk
H > 0.5 = momentum

Editable parameters:
mu = mean # Change mean value
eta = theta # Try decreasing theta for less mean reversion, increase for more mean reversion
sigma = standard deviation # Change the height of the peaks and valleys with standard deviation

# Create OU simulation
OU.sim <- function(T = 1000, mu = 0.75, eta = 0.3, sigma = 0.05){
  P_0 = mu # Starting price is the mean
  P = rep(P_0,T)
  for(i in 2:T){
    P[i] = P[i-1] + eta * (mu - P[i-1]) + sigma * rnorm(1) * P[i-1]
  }
  return(P)
}

# Plot
plot(OU.sim(), type="l", main="Mean Reversion Sim")

# Save plot to data frame
plot.df <- data.frame(OU.sim())
plot(plot.df$OU.sim.., type="l",main="Mean Reversion Sim")

[Figure: simulated mean reversion series]

Looks pretty mean reverting.

We stored the simulation in a data frame so lets run the Hurst exponent to see which H value we obtain.

# Hurst Exponent (varying lags)
require(magrittr)
require(zoo)
require(lattice)

#Create lagged variables
lags <- 2:20

# Function for finding differences in lags. Todays Close - 'n' lag period
getLAG.DIFF <- function(lagdays) {
  function(plot.df) {
    c(rep(NA, lagdays), diff(plot.df$OU.sim.., lag = lagdays))  # pad with NA to keep length
  }
}
# Create a matrix to put the lagged differences in
lag.diff.matrix <- matrix(nrow=nrow(plot.df), ncol=0)

# Loop for filling it
for (i in lags) {
  lag.diff.matrix <- cbind(lag.diff.matrix, getLAG.DIFF(i)(plot.df))
}

# Rename columns
colnames(lag.diff.matrix) <- sapply(lags, function(n)paste("lagged.diff.n", n, sep=""))

# Bind to existing dataframe
plot.df <-  cbind(plot.df, lag.diff.matrix)

# Calculate Variances of 'n period' differences
variance.vec <- apply(plot.df[,2:ncol(plot.df)], 2, function(x) var(x, na.rm=TRUE))

# Linear regression of log variances vs log lags
log.linear <- lm(formula = log(variance.vec) ~ log(lags))
# Print general linear regression statistics
summary(log.linear)
# Plot log of variance 'n' lags vs log time
xyplot(log(variance.vec) ~ log(lags),
       main="SPY Daily Price Differences Variance vs Time Lags",
       xlab = "Time",
       ylab = "Logged Variance 'n' lags",
       grid = TRUE,
       type = c("p","r"),col.line = "red",
       abline=(h = 0)) 

hurst.exponent = coef(log.linear)[2]/2
hurst.exponent

We obtain a Hurst exponent of 0.1368407, which is significantly mean reverting.

Let's change some of the parameters of the simulation to create a moderately mean reverting series. We can lower theta: changing eta = 0.3 to eta = 0.04 gives this output:

[Figure: simulated series with eta = 0.04]

It looks less mean reverting than the first, and H = 0.4140561. This is below 0.50 and is still considered mean reverting.

Let us test the SPY from 1993 (inception) to present (9.23.2017) to see what the H value is. The below chart is a linear regression of the log variance of the SPY's lagged differences against log time. The Hurst exponent is the slope / 2 (code included).

[Figure: SPY log variance of lagged differences vs log time lags]

The Hurst exponent for the SPY daily bars on time lags 2:20 is 0.4378202. We know that price series display different characteristics over varying time frames. If we simply plot the SPY daily closes:

[Figure: SPY daily closes]

Observing the long term trend, the series looks more trending, or momentum. We already tested the 2:20 day lags, giving an H value of 0.4378202; if we place the lags at 6 months to a year and a half (126:378 trading days) we get H = 0.6096454, on the momentum side of the scale.

So far – it is as expected.

What does a random series look like?

We can create one using randn from the ramify package: we simply cumsum the randomly generated data points and add a small positive drift to make it a trending series.

# Plot Random Walk With A Trend
require(ramify)
random.walk = cumsum(randn(10000)+0.025)
plot(random.walk, type="l", main="Random Walk")

# Random Walk Data Frame
random.df <- data.frame(cumsum(randn(10000)+0.03))
colnames(random.df)[1] <- "random"
plot(random.df$random, type="l", main="Random Walk")

[Figure: random walk with drift]

The H for this series (lags 2:20) is 0.4999474, which rounds to 0.50: a random walk.

It would seem that, based on these statistical tests, the Hurst exponent is reasonably accurate in reflecting the nature of a series. It should be noted that different lags capture different regimes: at lags 2:20 the SPY exhibits stronger mean reversion; over a 6 month to 1.5 year horizon (lags 126:378) it exhibited stronger momentum (H = 0.6096454); and at lags 50:100 it is close to a random walk (H = 0.5093078). What does this mean? We must optimize not only our models but also our time frames.
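Comparing lag windows is easier if the variance-vs-lag regression used throughout this post is wrapped in a small helper (a sketch; the function name `hurst` is my own, and the sanity check below uses a simulated random walk):

```r
# Hurst exponent via regression of log variance of n-lag differences on log(n);
# H is the regression slope divided by 2
hurst <- function(x, lags = 2:20) {
  log.var <- sapply(lags, function(n) log(var(diff(x, lag = n), na.rm = TRUE)))
  fit <- lm(log.var ~ log(lags))
  unname(coef(fit)[2] / 2)
}

# Sanity check on a pure random walk: H should come out near 0.5
set.seed(1)
rw <- cumsum(rnorm(10000))
hurst(rw, lags = 2:20)
```

With a helper like this, `hurst(x, 2:20)`, `hurst(x, 50:100)` and `hurst(x, 126:378)` can be compared side by side on the same series.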

To recap we:

1. Created a mean reverting price series with mu = 0.75, eta = 0.3, sigma = 0.05
2. Saved the output to a data frame and used the Hurst calculation (linear regression of log lagged price differences vs log time) over a 2:20 lag period to obtain the H value. See this post for more information on the Hurst exponent calculation: https://flare9xblog.wordpress.com/2017/08/11/hurst-exponent-in-r/
3. Found the result significantly mean reverting, as expected.
4. Tested SPY closes from 1993 to 9.23.2017. On a lag period of 2:20 the series was mean reverting, and on a 6 month to 1.5 year horizon the series was more momentum. This was as expected.
5. Created a random set of numbers and added a small drift to each data point to create a random walk with trend. We obtained an H value of 0.50 (rounded), as expected.

The parameters for the simulated series can be edited to change the characteristics and the Hurst exponent can be calculated on each output. Try making the series more mean reverting or less mean reverting and the H value should adjust accordingly.

Full R code below:


# Modelling different price series 
# Mean reversion, random and momentum
# Andrew Bannerman 9.24.2017

# Create OU simulation
# mu = mean
# eta = theta # Try decreasing theta for less mean reversion, increase for more mean reversion
# sigma = standard deviation # Change the height of the peaks and valleys with standard deviation
OU.sim <- function(T = 1000, mu = 0.75, eta = 0.04, sigma = 0.05){
  P_0 = mu # Starting price is the mean
  P = rep(P_0,T)
  for(i in 2:T){
    P[i] = P[i-1] + eta * (mu - P[i-1]) + sigma * rnorm(1) * P[i-1]
  }
  return(P)
}

# Plot
plot(OU.sim(), type="l", main="Mean Reversion Sim")

# Save plot to data frame 
plot.df <- data.frame(OU.sim())
plot(plot.df$OU.sim.., type="l",main="Mean Reversion Sim")

# Hurst Exponent Mean Reversion (varying lags)
require(magrittr)
require(zoo)
require(lattice)

#Create lagged variables
lags <- 2:20

# Function for finding differences in lags. Todays Close - 'n' lag period
getLAG.DIFF <- function(lagdays) {
  function(plot.df) {
    c(rep(NA, lagdays), diff(plot.df$OU.sim.., lag = lagdays))  # pad with NA to keep length
  }
}
# Create a matrix to put the lagged differences in
lag.diff.matrix <- matrix(nrow=nrow(plot.df), ncol=0)

# Loop for filling it
for (i in 2:20) {
  lag.diff.matrix <- cbind(lag.diff.matrix, getLAG.DIFF(i)(plot.df))
}

# Rename columns
colnames(lag.diff.matrix) <- sapply(2:20, function(n)paste("lagged.diff.n", n, sep=""))

# Bind to existing dataframe
plot.df <-  cbind(plot.df, lag.diff.matrix)

# Calculate Variances of 'n period' differences
variance.vec <- apply(plot.df[,2:ncol(plot.df)], 2, function(x) var(x, na.rm=TRUE))

# Linear regression of log variances vs log lags
log.linear <- lm(formula = log(variance.vec) ~ log(lags))  
# Print general linear regression statistics  
summary(log.linear) 
# Plot log of variance 'n' lags vs log time  
xyplot(log(variance.vec) ~ log(lags),         
       main="SPY Daily Price Differences Variance vs Time Lags",        
       xlab = "Time",        
       ylab = "Logged Variance 'n' lags",       
       grid = TRUE,        
       type = c("p","r"),col.line = "red",        
       abline=(h = 0)) 

hurst.exponent = coef(log.linear)[2]/2
hurst.exponent

# Write output to file
# write.csv(new.df, file="G:/R Projects/hurst.csv")

# Plot Random Walk With A Trend
require(ramify)
random.walk = cumsum(randn(10000)+0.025)
plot(random.walk, type="l", main="Random Walk")

# Random Walk Data Frame
random.df <- data.frame(cumsum(randn(10000)+0.03))
colnames(random.df)[1] <- "random"
plot(random.df$random, type="l", main="Random Walk")

# Hurst Exponent Random Walk (varying lags)
require(magrittr)
require(zoo)
require(lattice)

#Create lagged variables
lags <- 2:20

# Function for finding differences in lags. Todays Close - 'n' lag period
getLAG.DIFF <- function(lagdays) {
  function(random.df) {
    c(rep(NA, lagdays), diff(random.df$random, lag = lagdays))  # pad with NA to keep length
  }
}
# Create a matrix to put the lagged differences in
lag.diff.matrix <- matrix(nrow=nrow(random.df), ncol=0)

# Loop for filling it
for (i in 2:20) {
  lag.diff.matrix <- cbind(lag.diff.matrix, getLAG.DIFF(i)(random.df))
}

# Rename columns
colnames(lag.diff.matrix) <- sapply(2:20, function(n)paste("lagged.diff.n", n, sep=""))

# Bind to existing dataframe
random.df <-  cbind(random.df, lag.diff.matrix)

# Calculate Variances of 'n period' differences
variance.vec <- apply(random.df[,2:ncol(random.df)], 2, function(x) var(x, na.rm=TRUE))

# Linear regression of log variances vs log lags
log.linear <- lm(formula = log(variance.vec) ~ log(lags))  
# Print general linear regression statistics  
summary(log.linear) 
# Plot log of variance 'n' lags vs log time  
xyplot(log(variance.vec) ~ log(lags),         
       main="SPY Daily Price Differences Variance vs Time Lags",        
       xlab = "Time",        
       ylab = "Logged Variance 'n' lags",       
       grid = TRUE,        
       type = c("p","r"),col.line = "red",        
       abline=(h = 0)) 

hurst.exponent = coef(log.linear)[2]/2
hurst.exponent

References
Algorithmic Trading: Winning Strategies and Their Rationale – May 28, 2013, by Ernie Chan

Simple S&P500 Linear Strategy

From our Hurst exponent test in a previous post we determined that the S&P500 is a mean reverting series on an interday time frame. For this back test we have created a very simple linear model to 'fit' the nature of the series.

We use a rolling 'n' day z-score with a look back of 11 days; for some reason this lookback provided the best results. For the sake of simplicity we test over the full sample period and do not include any commissions or slippage.

We will use the SPY for this back test from inception to present.

Here are the parameters:
1. Rolling lookback period of z-score = 11 days
2. Entry level = < -0.2
3. Exit Time = 4 days

As you can see, the parameters are kept to a minimum. The main goal here is to fit a model to the existing anomaly rather than force-fit a model to the data.
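The rolling z-score itself can be condensed with TTR's running functions (a sketch of what the full code at the end of the post builds column by column; the `close` vector here is simulated as a stand-in for SPY closes):

```r
require(TTR)

set.seed(7)
close <- 100 + cumsum(rnorm(500, 0.03))   # hypothetical SPY-like daily closes

n <- 11                                    # look back equal to the half life
zscore <- (close - runMean(close, n)) / runSD(close, n)

# Entry signal: z-score below -0.2; the trade is taken at the NEXT day's open
signal <- ifelse(zscore < -0.2, 1, 0)
```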

[Figures: equity curve and performance table]

As we can see, we obtain better performance than buy and hold: roughly a 21% smaller drawdown and about a 10% annualized return from 1993 to present. Notice the time invested is 48%, meaning we have capital available to allocate to other strategies. To offset the drawdowns further we could add another uncorrelated strategy, but as a base this is not a bad attempt to capture the mean reverting nature of the S&P500 on an interday basis. This might not be the most stellar performance, but it is a good example to illustrate the train of thought.

Some questions to ponder:

1. What happens if we 'fit' a mean reversion model to S&P500 daily data from 1929 to 1980?
2. What happens if we fit a momentum model to S&P500 daily data from 1929 to 1980?
3. What happens if we fit a momentum model to S&P500 daily data from 1993 to present?

Those questions, good readers, I shall leave for you to answer!

The rules of this system are:
1. When the z-score crosses below -0.2, buy the next day's open
2. After 4 days have passed, sell everything; if another signal is generated while in the existing trade we take that signal also

Notes about the rules of the backtest:
1. We calculate S&P500 close-to-close returns and also open-to-close returns. We do this so the first day of a trade uses an open-to-close return (a signal is generated at the end of a day and we buy the next day's open, so the first day's return runs open to close; from day 2 onward the returns are close to close).
2. To avoid look-ahead bias, after a signal is generated we lag the signal series forward one day so that we are simulating buying the NEXT day's open.

All parameters can be optimized and entry and exit rules adjusted. Some ideas could be:

1. For the exit, exit at a specific z-score value, i.e. buy below -0.20 and sell when it crosses back over 0.
2. For the exit, ignore repeat signals while in an existing trade
3. Exit as soon as the price is higher than our entry

Different rules produce slightly different results. I may go into some of these different selling rules and the code for them in future posts! For now this is a good start 🙂
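As a sketch of the first variation (exit when the z-score crosses back over 0 instead of after a fixed 4-day hold), here is a simple position state machine; the `zscore` vector is simulated here, standing in for the rolling z-score the full code below computes:

```r
# Toy z-score series; in practice use e.g. new.df$close.zscore.n11
set.seed(3)
zscore <- rnorm(300)

# Position state machine: enter long when zscore < -0.2, exit when zscore > 0
position <- numeric(length(zscore))
for (i in 2:length(zscore)) {
  if (is.na(zscore[i])) { position[i] <- position[i - 1]; next }
  if (position[i - 1] == 0 && zscore[i] < -0.2) {
    position[i] <- 1                  # enter long
  } else if (position[i - 1] == 1 && zscore[i] > 0) {
    position[i] <- 0                  # exit when the mean is crossed
  } else {
    position[i] <- position[i - 1]    # otherwise hold the current state
  }
}
```

Multiplying `position` (lagged one day, as in the main back test) by the daily returns then gives the equity curve for this exit rule.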

# Calculate rolling z-score of SPY close price
# Sell at fixed n day hold
# 8.13.2017
# Andrew Bannerman
require(lubridate)
require(dplyr)
require(magrittr)
require(TTR)
require(zoo)
require(data.table)
require(xts)
require(PerformanceAnalytics)

# Data path
data.dir <- "D:/R Projects"
data.read.spx <- paste(data.dir,"SPY.csv",sep="/")

# Read data
read.spx <- read.csv(data.read.spx,header=TRUE, sep=",",skip=0,stringsAsFactors=FALSE)

# Make dataframe
new.df <- data.frame(read.spx)
tail(new.df)
# Convert Values To Numeric
cols <- c(3:8)
new.df[,cols] %<>% lapply(function(x) as.numeric(as.character(x)))

#Convert Date Column [1]
new.df$Date <- ymd(new.df$Date)

# Use TTR package to create rolling SMA n day moving average
# Create function and loop in order to repeat the desired number of SMAs for example 2:30
getSMA <- function(numdays) {
  function(new.df) {
    SMA(new.df[,"Close"], numdays) # Calls TTR package to create SMA
  }
}
# Create a matrix to put the SMAs in
sma.matrix <- matrix(nrow=nrow(new.df), ncol=0)

# Loop for filling it
for (i in 2:60) {
  sma.matrix <- cbind(sma.matrix, getSMA(i)(new.df))
}

# Rename columns
colnames(sma.matrix) <- sapply(2:60, function(n)paste("close.sma.n", n, sep=""))

# Bind to existing dataframe
new.df <- cbind(new.df, sma.matrix)

# Use TTR package to create rolling Standard Deviation
# Create function and loop in order to repeat the desired number of Stdev for example 2:30
getSD <- function(numdays) {
  function(new.df) {
    runSD(new.df$Close, numdays, cumulative = FALSE) # Calls TTR package to create rolling standard deviation
  }
}
# Create a matrix to put the SDs in
sd.matrix <- matrix(nrow=nrow(new.df), ncol=0)

# Loop for filling it
for (i in 2:60) {
  sd.matrix <- cbind(sd.matrix, getSD(i)(new.df))
}

# Rename columns
colnames(sd.matrix) <- sapply(2:60, function(n)paste("close.sd.n", n, sep=""))

# Bind to existing dataframe
new.df <- cbind(new.df, sd.matrix)

# Use base R to work out the rolling z-score: (Close - rolling mean) / rolling stdev
for (n in 2:40) {
  new.df[[paste0("close.zscore.n", n)]] <-
    (new.df$Close - new.df[[paste0("close.sma.n", n)]]) / new.df[[paste0("close.sd.n", n)]]
}
new.df$close.zscore.n41 <- apply(new.df[,c('Close','close.sma.n41', 'close.sd.n41')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n42 <- apply(new.df[,c('Close','close.sma.n42', 'close.sd.n42')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n43 <- apply(new.df[,c('Close','close.sma.n43', 'close.sd.n43')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n44 <- apply(new.df[,c('Close','close.sma.n44', 'close.sd.n44')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n45 <- apply(new.df[,c('Close','close.sma.n45', 'close.sd.n45')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n46 <- apply(new.df[,c('Close','close.sma.n46', 'close.sd.n46')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n47 <- apply(new.df[,c('Close','close.sma.n47', 'close.sd.n47')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n48 <- apply(new.df[,c('Close','close.sma.n48', 'close.sd.n48')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n49 <- apply(new.df[,c('Close','close.sma.n49', 'close.sd.n49')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n50 <- apply(new.df[,c('Close','close.sma.n50', 'close.sd.n50')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n51 <- apply(new.df[,c('Close','close.sma.n51', 'close.sd.n51')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n52 <- apply(new.df[,c('Close','close.sma.n52', 'close.sd.n52')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n53 <- apply(new.df[,c('Close','close.sma.n53', 'close.sd.n53')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n54 <- apply(new.df[,c('Close','close.sma.n54', 'close.sd.n54')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n55 <- apply(new.df[,c('Close','close.sma.n55', 'close.sd.n55')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n56 <- apply(new.df[,c('Close','close.sma.n56', 'close.sd.n56')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n57 <- apply(new.df[,c('Close','close.sma.n57', 'close.sd.n57')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n58 <- apply(new.df[,c('Close','close.sma.n58', 'close.sd.n58')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n59 <- apply(new.df[,c('Close','close.sma.n59', 'close.sd.n59')], 1, function(x) { (x[1]-x[2])/x[3] } )
new.df$close.zscore.n60 <- apply(new.df[,c('Close','close.sma.n60', 'close.sd.n60')], 1, function(x) { (x[1]-x[2])/x[3] } )

# Convert all NA to 0
new.df[is.na(new.df)] <- 0

# Calculate position of close within the day's range: (Close - Low) / (High - Low)
#new.df$quartile <- apply(new.df[,c('Close', 'Low', 'High')], 1, function(x) { (x[1]-x[2])/(x[3]-x[2])} )

# Calculate Returns from open to close
new.df$ocret <- apply(new.df[,c('Open', 'Close')], 1, function(x) { (x[2]-x[1])/x[1]} )

# Calculate Close-to-Close returns
new.df$clret <- ROC(new.df$Close, type = c("discrete"))
new.df$clret[1] <- 0
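`ROC(..., type = "discrete")` from the TTR package returns simple percent changes, P_t / P_{t-1} - 1, with an NA in the first slot (zeroed out above). A quick check on made-up prices:

```r
library(TTR)

prices <- c(100, 110, 99)
ROC(prices, type = "discrete")
# First element is NA; then 110/100 - 1 = 0.10 and 99/110 - 1 = -0.10
```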

# Name indicators
indicator <- new.df$close.zscore.n12

# Create Long Signals
new.df$signal <- ifelse(indicator < -.2, 1, 0)

######## Fixed 'n' day hold logic ##########
# Variable for loop
indicator <- new.df$signal

# Create Vector From Signal
signal.1 <- c(indicator)

# Set variable for number of days to hold
n.day <- 4

# Loop for fixed 'n' day hold
res <- NULL
while (length(res) < length(signal.1)) {
  if (signal.1[length(res) + 1] == 1) {
    res <- c(res, rep(1, n.day))
  } else {
    res <- c(res, 0)
  }
}
res <- res[1:length(signal.1)]
new.df <- data.frame(new.df,response = res)
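The hold-expansion loop can be sanity-checked on a toy signal (standalone `toy.*` names so this doesn't clobber the script's own `signal.1`, `n.day`, and `res`). A signal on day two with a 4-day hold should produce four consecutive 1s; note that by design the loop skips any new signals that fire during an open hold:

```r
# Toy check of the fixed n-day hold expansion
toy.signal <- c(0, 1, 0, 0, 0, 0, 0)
toy.n.day <- 4
toy.res <- NULL
while (length(toy.res) < length(toy.signal)) {
  if (toy.signal[length(toy.res) + 1] == 1) {
    toy.res <- c(toy.res, rep(1, toy.n.day))
  } else {
    toy.res <- c(toy.res, 0)
  }
}
toy.res <- toy.res[1:length(toy.signal)]
toy.res # 0 1 1 1 1 0 0 -- the day-2 signal holds days 2 through 5
```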

# Lag the signal one day forward so entry occurs on the next day's open
# (requires dplyr and data.table)
new.df$response <- dplyr::lag(new.df$response, 1)
new.df$response[is.na(new.df$response)] <- 0

# Group consecutive signal days with run-length IDs, then build the equity curve:
# open-to-close return on the entry day, close-to-close on subsequent hold days
new.df <- new.df %>%
  dplyr::mutate(RunID = data.table::rleid(response)) %>%
  group_by(RunID) %>%
  mutate(equity.curve = ifelse(response == 0, 0,
                               ifelse(row_number() == 1, ocret, clret))) %>%
  ungroup() %>%
  select(-RunID)
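The grouping step leans on run-length IDs: `data.table::rleid` starts a new ID every time the value changes, so each stretch of consecutive signal (or flat) days becomes its own group:

```r
library(data.table)

resp <- c(0, 0, 1, 1, 1, 0, 1)
rleid(resp) # 1 1 2 2 2 3 4
```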

# Pull select columns from data frame to make XTS whilst retaining formats
xts1 <- xts(new.df$equity.curve, order.by = as.Date(new.df$Date, format = "%m/%d/%Y"))
xts2 <- xts(new.df$clret, order.by = as.Date(new.df$Date, format = "%m/%d/%Y"))

# Join XTS together
compare <- cbind(xts1,xts2)
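`cbind` on xts objects dispatches to `merge.xts`, an outer join on the date index, so the strategy and buy-and-hold series stay date-aligned. A toy illustration:

```r
library(xts)

a <- xts(1:3, order.by = as.Date(c("2020-01-01", "2020-01-02", "2020-01-03")))
b <- xts(4:5, order.by = as.Date(c("2020-01-02", "2020-01-03")))
m <- cbind(a, b)
m # 3 rows; the b column is NA on 2020-01-01
```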

# Use the PerformanceAnalytics package for trade statistics
# install.packages("PerformanceAnalytics")
require(PerformanceAnalytics)
colnames(compare) <- c("Mean Reversion","Buy And Hold")
charts.PerformanceSummary(compare,main="Cumulative Returns", wealth.index=TRUE, colorset=rainbow12equal)
#png(filename="20090606_rsi2_performance_updated.png", 720, 720)
performance.table <- rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare),table.DownsideRisk(compare))
drawdown.table <- rbind(table.Drawdowns(compare))
#dev.off()
# Multi-day hold returns: percent change from the first open to the last close of
# each signal run, recorded on the run's final day
new.df <- new.df %>%
  dplyr::mutate(RunID = data.table::rleid(response)) %>%
  group_by(RunID) %>%
  dplyr::mutate(perc.output = ifelse(response == 0, 0,
                                     ifelse(row_number() == n(),
                                            (last(Close) - first(Open)) / first(Open), 0))) %>%
  ungroup() %>%
  select(-RunID)

# All NA to 0
new.df[is.na(new.df)] <- 0
# Win / Loss %
# 1 Day Hold Trades
winning.trades <- sum(new.df$equity.curve > 0, na.rm = TRUE)
losing.trades <- sum(new.df$equity.curve < 0, na.rm = TRUE)
total.days <- NROW(new.df$equity.curve)
# Multi Day Hold Trades
multi.winning.trades <- sum(new.df$perc.output > 0, na.rm = TRUE)
multi.losing.trades <- sum(new.df$perc.output < 0, na.rm = TRUE)
multi.total.days <- NROW(new.df$perc.output)
multi.losing.trades
# % Time Invested (Same column for 1 day and multi hold trades)
time.invested <- (winning.trades + losing.trades) / total.days
winning.trades + losing.trades
winning.trades
losing.trades

# Calculate win / loss %
# 1 Day Hold Trades
total <- winning.trades + losing.trades
win.percent <- winning.trades / total
loss.percent <- losing.trades / total
# Multi Day Hold Trades
multi.total <- multi.winning.trades + multi.losing.trades
multi.win.percent <- multi.winning.trades / multi.total
multi.loss.percent <- multi.losing.trades / multi.total
# Calculate Consecutive Wins Loss
# 1 Day Hold Trades
remove.zero <- new.df[-which(new.df$equity.curve == 0), ] # drop rows with zero values
runs <- rle(sign(remove.zero$equity.curve))
consec.wins <- max(runs$lengths[runs$values == 1])
consec.loss <- max(runs$lengths[runs$values == -1])

# Multi Day Hold Trades
multi.remove.zero <- new.df[-which(new.df$perc.output == 0), ] # drop rows with zero values
multi.runs <- rle(sign(multi.remove.zero$perc.output))
multi.consec.wins <- max(multi.runs$lengths[multi.runs$values == 1])
multi.consec.loss <- max(multi.runs$lengths[multi.runs$values == -1])
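The `rle(sign(...))` idiom above finds the longest winning and losing streaks: `sign` maps returns to +1/-1 and `rle` measures the length of each run of identical values. For example:

```r
rets <- c(0.01, 0.02, -0.01, 0.03, 0.04, 0.05)
r.toy <- rle(sign(rets))
max(r.toy$lengths[r.toy$values == 1])  # longest winning streak: 3
max(r.toy$lengths[r.toy$values == -1]) # longest losing streak: 1
```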

# Calculate Summary Statistics
# 1 Day Hold Trades OR all days if multi holding
average.trade <- mean(new.df$equity.curve)
average.win <- mean(new.df$equity.curve[new.df$equity.curve > 0])
average.loss <- mean(new.df$equity.curve[new.df$equity.curve < 0])
median.win <- median(new.df$equity.curve[new.df$equity.curve > 0])
median.loss <- median(new.df$equity.curve[new.df$equity.curve < 0])
max.gain <- max(new.df$equity.curve)
max.loss <- min(new.df$equity.curve)
win.loss.ratio <- winning.trades / losing.trades
summary <- cbind(winning.trades,losing.trades,win.percent,loss.percent,win.loss.ratio,time.invested,average.trade,average.win,average.loss,median.win,median.loss,consec.wins,consec.loss,max.gain,max.loss)
summary <- as.data.frame(summary)
colnames(summary) <- c("Winning Trades","Losing Trades","Win %","Loss %","Win Loss Ratio","Time Invested","Average Trade","Average Win","Average Loss","Median Gain","Median Loss","Consec Wins","Consec Loss","Maximum Win","Maximum Loss")
print(summary)

# Multi Day Hold Trades
multi.average.trade <- mean(new.df$perc.output)
multi.average.win <- mean(new.df$perc.output[new.df$perc.output > 0])
multi.average.loss <- mean(new.df$perc.output[new.df$perc.output < 0])
multi.median.win <- median(new.df$perc.output[new.df$perc.output > 0])
multi.median.loss <- median(new.df$perc.output[new.df$perc.output < 0])
multi.win.loss.ratio <- multi.average.win / abs(multi.average.loss)
multi.max.gain <- max(new.df$perc.output)
multi.max.loss <- min(new.df$perc.output)
multi.summary <- cbind(multi.winning.trades,multi.losing.trades,multi.win.percent,multi.loss.percent,multi.win.loss.ratio,time.invested,multi.average.trade,multi.average.win,multi.average.loss,multi.median.win,multi.median.loss,multi.consec.wins,multi.consec.loss,multi.max.gain,multi.max.loss)
multi.summary <- as.data.frame(multi.summary)
colnames(multi.summary) <- c("Winning Trades","Losing Trades","Win %","Loss %","Win Loss Ratio","Time Invested","Average Trade","Average Win","Average Loss","Median Gain","Median Loss","Consec Wins","Consec Loss","Maximum Win","Maximum Loss")
print(multi.summary)
print(performance.table)
print(drawdown.table)
table.Drawdowns(xts1, top=10)
charts.PerformanceSummary(compare,main="Cumulative Returns",wealth.index=TRUE,colorset=rainbow12equal)

# Write output to file
write.csv(new.df,file="D:/R Projects/spy_mean.csv")