Simple S&P500 Linear Strategy

In a previous post, our Hurst exponent test determined that the S&P500 is a mean reverting series on an interday time frame. For this back test we have created a very simple linear model to ‘fit’ the nature of the series.

We use a rolling ‘n’ day z-score and test with a lookback of 11 days; of the lookbacks tried, 11 days happened to provide the best results. For the sake of simplicity we will test over the full sample period and we will not include any commissions or slippage.

We will use the SPY for this back test from inception to present.

Here are the parameters:
1. Rolling lookback period of z-score = 11 days
2. Entry level: z-score < -0.2
3. Exit Time = 4 days
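
Before the full back test, the core calculation is worth seeing on its own. Here is a minimal sketch of the rolling z-score and entry signal using the TTR package, assuming Close is a numeric vector of SPY closing prices (the complete script is further down):

# Minimal rolling z-score sketch (assumes Close is a numeric close price vector)
require(TTR)

n <- 11                                          # rolling lookback period
roll.mean <- SMA(Close, n)                       # rolling n day simple moving average
roll.sd <- runSD(Close, n, cumulative = FALSE)   # rolling n day standard deviation
zscore <- (Close - roll.mean) / roll.sd

signal <- ifelse(zscore < -0.2, 1, 0)            # long signal when z-score < -0.2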

As you can see, the parameters are kept to a minimum. The main goal here is to fit a model to an existing anomaly, versus force-fitting a model to the data.

(Chart: back test result, 11 day lookback.)

As we can see, we obtain better performance than buy and hold: roughly a 21% smaller maximum drawdown than buy and hold and about a 10% annualized return from 1993 to present. Notice the time invested is 48%, which means we have capital free to allocate to other strategies. To offset the draw downs even further we could add another uncorrelated strategy, but as a base this is not a bad attempt at capturing the mean reverting nature of the S&P500 on an interday basis. Note this might not be the most stellar performance, but it is a good example to illustrate the train of thought.

Some questions to ponder:

1. What happens if we ‘fit’ a mean reversion model to S&P500 daily data from 1929 to 1980?
2. What happens if we fit a momentum model to S&P500 daily data from 1929 to 1980?
3. What happens if we fit a momentum model to S&P500 daily data from 1993 to present?

Those questions, good readers, I shall leave for you to answer!

The rules of this system are:
1. When the z-score crosses below -0.2, buy the next day's open
2. After 4 days have passed, sell all; if another signal is generated while in the existing trade, we take that next signal also

Notes about the rules of the backtest:
1. We calculate S&P500 close-to-close returns and also open-to-close returns. We do this in order to set the first buying day to use an open-to-close return. (At the end of a day a signal is generated; we then buy the next day's open, so the return for the first day of a trade is an open-to-close return, and from day 2 onward the returns are close-to-close returns.)
2. To avoid look-ahead bias, after a signal is generated we lag the signal series forward one day, so that we are simulating buying the NEXT day's open (see the sketch below).
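
As a toy illustration of notes 1 and 2, here is a sketch of the return columns and the signal lag, assuming a hypothetical data frame df with Open, Close and a 0/1 signal column:

# Sketch of the return columns and signal lag (hypothetical df with Open, Close, signal)
require(dplyr)
require(TTR)

df$ocret <- (df$Close - df$Open) / df$Open   # open-to-close return (first day of a trade)
df$clret <- ROC(df$Close, type = "discrete") # close-to-close return (subsequent days)

# A signal generated at day t's close is traded at day t+1's open,
# so shift the signal one day forward before applying it to returns
df$response <- dplyr::lag(df$signal, 1)
df$response[is.na(df$response)] <- 0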

All parameters can be optimized and entry and exit rules adjusted. Some ideas could be:

1. For exit, exit on a specific z-score value… i.e. buy below -0.20 and sell when the z-score crosses back over 0 (see the sketch after this list)
2. For exit, ignore repeat trades while in existing trade
3. Exit as soon as the price is higher than our entry
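
As an example of idea 1, here is a minimal sketch of a z-score cross exit, assuming the zscore vector from earlier; this is an illustration, not the tested system:

# Sketch of exit idea 1: enter below -0.2, exit when the z-score crosses back over 0
# (assumes zscore is a numeric vector; position is 1 when long, 0 when flat)
position <- numeric(length(zscore))
for (i in 2:length(zscore)) {
  if (is.na(zscore[i])) { position[i] <- position[i-1]; next }
  if (position[i-1] == 0 && zscore[i] < -0.2) {
    position[i] <- 1                    # enter long
  } else if (position[i-1] == 1 && zscore[i] > 0) {
    position[i] <- 0                    # z-score crossed back over 0: exit
  } else {
    position[i] <- position[i-1]        # otherwise carry the previous state
  }
}
# As in the main back test, position would then be lagged one day forward.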

Different rules produce slightly different results. I may go into some of these different selling rules and the code for them in future posts! For now this is a good start 🙂

# Calculate rolling z-score of SPY close price
# Sell at fixed n day hold
# 8.13.2017
# Andrew Bannerman
require(lubridate)
require(dplyr)
require(magrittr)
require(TTR)
require(zoo)
require(data.table)
require(xts)
require(PerformanceAnalytics)

# Data path
data.dir <- "D:/R Projects"
data.read.spx <- paste(data.dir,"SPY.csv",sep="/")

# Read data
read.spx <- read.csv(data.read.spx,header=TRUE, sep=",",skip=0,stringsAsFactors=FALSE)

# Make dataframe
new.df <- data.frame(read.spx)
tail(new.df)
# Convert Values To Numeric
cols <-c(3:8)
new.df[,cols] %<>% lapply(function(x) as.numeric(as.character(x)))

#Convert Date Column [1]
new.df$Date <- ymd(new.df$Date)

# Use TTR package to create rolling SMA n day moving average
# Create function and loop in order to repeat the desired number of SMAs for example 2:30
getSMA <- function(numdays) {
  function(new.df) {
    SMA(new.df[,"Close"], numdays) # Calls TTR package to create SMA
  }
}
# Create a matrix to put the SMAs in
sma.matrix <- matrix(nrow=nrow(new.df), ncol=0)

# Loop for filling it
for (i in 2:60) {
  sma.matrix <- cbind(sma.matrix, getSMA(i)(new.df))
}

# Rename columns
colnames(sma.matrix) <- sapply(2:60, function(n)paste("close.sma.n", n, sep=""))

# Bind to existing dataframe
new.df <- cbind(new.df, sma.matrix)

# Use TTR package to create rolling Standard Deviation
# Create function and loop in order to repeat the desired number of Stdev for example 2:30
getSD <- function(numdays) {
  function(new.df) {
    runSD(new.df$Close, numdays, cumulative = FALSE) # Calls TTR package to create rolling standard deviation
  }
}
# Create a matrix to put the rolling standard deviations in
sd.matrix <- matrix(nrow=nrow(new.df), ncol=0)

# Loop for filling it
for (i in 2:60) {
  sd.matrix <- cbind(sd.matrix, getSD(i)(new.df))
}

# Rename columns
colnames(sd.matrix) <- sapply(2:60, function(n)paste("close.sd.n", n, sep=""))

# Bind to existing dataframe
new.df <- cbind(new.df, sd.matrix)

# Use base R to work out the rolling z-score: (Close - rolling mean) / rolling stdev
# One column per lookback: close.zscore.n2 through close.zscore.n60
for (i in 2:60) {
  new.df[, paste("close.zscore.n", i, sep="")] <-
    (new.df$Close - new.df[, paste("close.sma.n", i, sep="")]) /
    new.df[, paste("close.sd.n", i, sep="")]
}

# Convert all NA to 0
new.df[is.na(new.df)] <- 0

# Calculate quartiles: where the close is in relation to the day's range, (Close - Low) / (High - Low)
#new.df$quartile <- apply(new.df[,c('Close', 'Low', 'High')], 1, function(x) { (x[1]-x[2])/(x[3]-x[2])} )

# Calculate Returns from open to close
new.df$ocret <- apply(new.df[,c('Open', 'Close')], 1, function(x) { (x[2]-x[1])/x[1]} )

# Calculate Close-to-Close returns
new.df$clret <- ROC(new.df$Close, type = c("discrete"))
new.df$clret[1] <- 0

# Name indicators
indicator <- new.df$close.zscore.n12

# Create Long Signals
new.df$signal <- ifelse(indicator < -.2, 1, 0)

######## Fixed 'n' day hold logic ##########
# Variable for loop
indicator <- new.df$signal

# Create Vector From Signal
signal.1 <- c(indicator)

# Set variable for number of days to hold
n.day <- 4

# Loop for fixed 'n' day hold
res <- NULL
while (length(res) < length(signal.1)) {
  if (signal.1[length(res)+1] == 1) {
    res <- c(res, rep(1, n.day))
  } else {
    res <- c(res, 0)
  }
}
res <- res[1:length(signal.1)]
new.df <- data.frame(new.df,response = res)

# Lag signal one day forward to simulate entering at the NEXT day's open
new.df$response <- dplyr::lag(new.df$response, 1)
new.df$response[is.na(new.df$response)] <- 0

# Compute the equity curve per trade block: the first day of a trade uses the
# open-to-close return, subsequent days use close-to-close returns
new.df <- new.df %>%
  dplyr::mutate(RunID = data.table::rleid(response)) %>%
  group_by(RunID) %>%
  mutate(equity.curve = ifelse(response == 0, 0,
                               ifelse(row_number() == 1, ocret, clret))) %>%
  ungroup() %>%
  select(-RunID)

# Pull select columns from data frame to make XTS whilst retaining formats
xts1 = xts(new.df$equity.curve, order.by=as.Date(new.df$Date, format="%m/%d/%Y"))
xts2 = xts(new.df$clret, order.by=as.Date(new.df$Date, format="%m/%d/%Y"))

# Join XTS together
compare <- cbind(xts1,xts2)

# Use the PerformanceAnalytics package for trade statistics
# install.packages("PerformanceAnalytics")
require(PerformanceAnalytics)
colnames(compare) <- c("Mean Reversion","Buy And Hold")
charts.PerformanceSummary(compare,main="Cumulative Returns", wealth.index=TRUE, colorset=rainbow12equal)
#png(filename="20090606_rsi2_performance_updated.png", 720, 720)
performance.table <- rbind(table.AnnualizedReturns(compare), maxDrawdown(compare), CalmarRatio(compare),table.DownsideRisk(compare))
drawdown.table <- rbind(table.Drawdowns(compare))
#dev.off()
# Compute per-trade percent return for multi-day holds: on the last day of each
# trade block record the (exit close - entry open) / entry open return
new.df <- new.df %>%
  dplyr::mutate(RunID = data.table::rleid(response)) %>%
  group_by(RunID) %>%
  dplyr::mutate(perc.output = ifelse(response == 0, 0,
                                     ifelse(row_number() == n(),
                                            (last(Close) - first(Open))/first(Open), 0))) %>%
  ungroup() %>%
  select(-RunID)

# All NA to 0
new.df[is.na(new.df)] <- 0
# Win / Loss %
# 1 Day Hold Trades
winning.trades <- sum(new.df$equity.curve > 0, na.rm=TRUE)
losing.trades <- sum(new.df$equity.curve < 0, na.rm=TRUE)
total.days <- NROW(new.df$equity.curve)
# Multi Day Hold Trades
multi.winning.trades <- sum(new.df$perc.output > 0, na.rm=TRUE)
multi.losing.trades <- sum(new.df$perc.output < 0, na.rm=TRUE)
multi.total.days <- NROW(new.df$perc.output)
multi.losing.trades
# % Time Invested (Same column for 1 day and multi hold trades)
time.invested <- (winning.trades + losing.trades) / total.days
winning.trades + losing.trades
winning.trades
losing.trades

# Calculate win loss %
# 1 Day Hold Trades
total <- winning.trades + losing.trades
win.percent <- winning.trades / total
loss.percent <- losing.trades / total
# Multi Day Hold Trades
multi.total <- multi.winning.trades + multi.losing.trades
multi.win.percent <- multi.winning.trades / multi.total
multi.loss.percent <- multi.losing.trades / multi.total
# Calculate Consecutive Wins Loss
# 1 Day Hold Trades
remove.zero <- new.df[-which(new.df$equity.curve == 0 ), ] # removing rows 0 values
consec.wins <- max(rle(sign(remove.zero$equity.curve))[[1]][rle(sign(remove.zero$equity.curve))[[2]] == 1])
consec.loss <- max(rle(sign(remove.zero$equity.curve))[[1]][rle(sign(remove.zero$equity.curve))[[2]] == -1])

# Multi Day Hold Trades
multi.remove.zero <- new.df[-which(new.df$perc.output == 0 ), ] # removing rows 0 values
multi.consec.wins <- max(rle(sign(multi.remove.zero$perc.output))[[1]][rle(sign(multi.remove.zero$perc.output))[[2]] == 1])
multi.consec.loss <-max(rle(sign(multi.remove.zero$perc.output))[[1]][rle(sign(multi.remove.zero$perc.output))[[2]] == -1])

# Calculate Summary Statistics
# 1 Day Hold Trades OR all days if multi holding
average.trade <- mean(new.df$equity.curve)
average.win <- mean(new.df$equity.curve[new.df$equity.curve > 0])
average.loss <- mean(new.df$equity.curve[new.df$equity.curve < 0])
median.win <- median(new.df$equity.curve[new.df$equity.curve > 0])
median.loss <- median(new.df$equity.curve[new.df$equity.curve < 0])
max.gain <- max(new.df$equity.curve)
max.loss <- min(new.df$equity.curve)
win.loss.ratio <- average.win / abs(average.loss)
summary <- cbind(winning.trades,losing.trades,win.percent,loss.percent,win.loss.ratio,time.invested,average.trade,average.win,average.loss,median.win,median.loss,consec.wins,consec.loss,max.gain,max.loss)
summary <- as.data.frame(summary)
colnames(summary) <- c("Winning Trades","Losing Trades","Win %","Loss %","Win Loss Ratio","Time Invested","Average Trade","Average Win","Average Loss","Median Gain","Median Loss","Consec Wins","Consec Loss","Maximum Win","Maximum Loss")
print(summary)

# Multi Day Hold Trades
multi.average.trade <- mean(new.df$perc.output)
multi.average.win <- mean(new.df$perc.output[new.df$perc.output > 0])
multi.average.loss <- mean(new.df$perc.output[new.df$perc.output < 0])
multi.median.win <- median(new.df$perc.output[new.df$perc.output > 0])
multi.median.loss <- median(new.df$perc.output[new.df$perc.output < 0])
multi.win.loss.ratio <- multi.average.win / abs(multi.average.loss)
multi.max.gain <- max(new.df$perc.output)
multi.max.loss <- min(new.df$perc.output)
multi.summary <- cbind(multi.winning.trades,multi.losing.trades,multi.win.percent,multi.loss.percent,multi.win.loss.ratio,time.invested,multi.average.trade,multi.average.win,multi.average.loss,multi.median.win,multi.median.loss,multi.consec.wins,multi.consec.loss,multi.max.gain,multi.max.loss)
multi.summary <- as.data.frame(multi.summary)
colnames(multi.summary) <- c("Winning Trades","Losing Trades","Win %","Loss %","Win Loss Ratio","Time Invested","Average Trade","Average Win","Average Loss","Median Gain","Median Loss","Consec Wins","Consec Loss","Maximum Win","Maximum Loss")
print(multi.summary)
print(performance.table)
print(drawdown.table)
table.Drawdowns(xts1, top=10)
charts.PerformanceSummary(compare,main="Cumulative Returns",wealth.index=TRUE,colorset=rainbow12equal)

# Write output to file
write.csv(new.df,file="D:/R Projects/spy_mean.csv")


Hurst Exponent in R

The Hurst exponent is a statistical test of whether a series is mean reverting, trending, or a geometric Brownian motion. Using the Hurst exponent, a time series can be categorized as follows:

Hurst value < 0.5 = mean reverting

Hurst value = 0.5 = geometric Brownian motion

Hurst value > 0.5 = trending

The Hurst exponent falls in a range from 0 to 1, where values closer to 0 signal stronger mean reversion and values closer to 1 signal stronger trending behavior.
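
The categorization comes from how the variance of price changes scales with the measurement lag. For a series with Hurst exponent H:

\mathrm{Var}\left(x_{t+\tau} - x_t\right) \propto \tau^{2H} \quad\Longrightarrow\quad \log \mathrm{Var}\left(x_{t+\tau} - x_t\right) = 2H \log \tau + c

A pure random walk has variance growing linearly in the lag, giving H = 0.5; slower growth gives H < 0.5 (mean reversion) and faster growth gives H > 0.5 (trending). This is why the regression further down fits logged variances against logged lags and halves the slope.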

Using R, we can calculate the hurst exponent:

# Hurst Exponent
# Andrew Bannerman
# 8.11.2017

require(lubridate)
require(dplyr)
require(magrittr)
require(zoo)
require(lattice)

# Data path
data.dir <- "G:/R Projects"
output.dir <- "G:/R Projects"
data.read.spx <- paste(data.dir,"SPY.csv",sep="/")

# Read data
read.spx <- read.csv(data.read.spx,header=TRUE, sep=",",skip=0,stringsAsFactors=FALSE)

# Convert Values To Numeric
cols <-c(3:8)
read.spx[,cols] %<>% lapply(function(x) as.numeric(as.character(x)))

# Convert Date Column [1]
read.spx$Date <- ymd(read.spx$Date)

# Make new data frame
new.df <- data.frame(read.spx)

# Subset Date Range
new.df <- subset(new.df, Date >= "2000-01-06" & Date <= "2017-08-06")

#Create lagged variables

lags <- 2:20

# Function for finding differences in lags: today's close minus the close 'n' days ago
getLAG.DIFF <- function(lagdays) {
  function(new.df) {
    c(rep(NA, lagdays), diff(new.df$Close, lag = lagdays, differences = 1))
  }
}
# Create a matrix to put the lagged differences in
lag.diff.matrix <- matrix(nrow=nrow(new.df), ncol=0)

# Loop for filling it
for (i in 2:20) {
  lag.diff.matrix <- cbind(lag.diff.matrix, getLAG.DIFF(i)(new.df))
}

# Rename columns
colnames(lag.diff.matrix) <- sapply(2:20, function(n)paste("lagged.diff.n", n, sep=""))

# Bind to existing dataframe
new.df <-  cbind(new.df, lag.diff.matrix)

# Calculate Variances of 'n period' differences
variance.vec <- apply(new.df[,9:ncol(new.df)], 2, function(x) var(x, na.rm=TRUE))

# Linear regression of log variances vs log lags
log.linear <- lm(formula = log(variance.vec) ~ log(lags))  
# Print general linear regression statistics
summary(log.linear)

# Plot log of variance 'n' lags vs log lags
xyplot(log(variance.vec) ~ log(lags),
       main = "SPY Daily Price Differences Variance vs Time Lags",
       xlab = "Log Lags",
       ylab = "Logged Variance 'n' lags",
       grid = TRUE,
       type = c("p","r"), col.line = "red",
       abline = (h = 0))

hurst.exponent = coef(log.linear)[2]/2
hurst.exponent 

# Write output to file
write.csv(new.df,file="G:/R Projects/hurst.csv")

For a little explanation of what is actually going on here:

1. First we compute the lagged differences in close prices for the SPY. We take today's SPY close minus the close 2 days ago, which gives us the price difference between today's SPY close and the SPY close 2 days ago. We do this for each lag in 2:20; for lag 3 we take today's SPY close minus the SPY close 3 days ago, and the process repeats through to lag 20. This rolls through the entire series, as is evident with head(new.df):

 > head(new.df)
           Date Ticker     Open     High      Low    Close  Volume Open.Interest lagged.diff.n2 lagged.diff.n3 lagged.diff.n4 lagged.diff.n5
1753 2000-01-06    SPY 139.2124 141.0819 137.3430 137.3430 6245656           138             NA             NA             NA             NA
1754 2000-01-07    SPY 139.8979 145.3193 139.6486 145.3193 8090507           146             NA             NA             NA             NA
1755 2000-01-10    SPY 145.8178 146.4410 144.5715 145.8178 5758617           146        8.47488             NA             NA             NA
1756 2000-01-11    SPY 145.3816 145.6932 143.0760 143.8861 7455732           144       -1.43326        6.54310             NA             NA
1757 2000-01-12    SPY 144.1976 144.1976 142.4528 142.6398 6932185           143       -3.17808       -2.67956        5.29680             NA
1758 2000-01-13    SPY 144.0730 145.3193 142.8267 144.5715 5173588           145        0.68547       -1.24631       -0.74779        7.22857

There are leading NAs, depending on which lag period is used, as the calculation rolls through the series taking the lagged differences.

2. After this step we have all of our lagged differences from 2:20 (or any other chosen range).

3. We then, for each 'n' lag period, compute the variance of that particular lagged difference series over its full available length. We can see this by printing the variance vector:

 > variance.vec
 lagged.diff.n2  lagged.diff.n3  lagged.diff.n4  lagged.diff.n5  lagged.diff.n6  lagged.diff.n7  lagged.diff.n8  lagged.diff.n9 lagged.diff.n10
       4.288337        6.065315        7.823918        9.552756       11.155789       12.702647       14.185067       15.724892       17.180618
lagged.diff.n11 lagged.diff.n12 lagged.diff.n13 lagged.diff.n14 lagged.diff.n15 lagged.diff.n16 lagged.diff.n17 lagged.diff.n18 lagged.diff.n19
      18.651980       20.167477       21.854415       23.647368       25.289570       26.751552       28.403188       30.110954       31.620225
lagged.diff.n20
      33.130844

This shows us the variance for each of our lagged differences from 2 to 20.
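
As a design note, the same variance vector can be computed more compactly without the matrix-and-loop approach; a minimal alternative sketch, assuming new.df$Close as above:

# Compact alternative: variance of each 'n' lag close difference, n = 2:20
lags <- 2:20
variance.vec <- sapply(lags, function(n) var(diff(new.df$Close, lag = n), na.rm = TRUE))
names(variance.vec) <- paste("lagged.diff.n", lags, sep="")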

4. Next we regress the log of the variances against the log of the lags and plot the result.

# Linear regression of log variances vs log lags
log.linear <- lm(formula = log(variance.vec) ~ log(lags))

# Plot log of variance 'n' lags vs log lags
xyplot(log(variance.vec) ~ log(lags),
       main="SPY Daily Price Differences Variance vs Time Lags",
       xlab = "Log Lags",
       ylab = "Logged Variance 'n' lags",
       grid = TRUE,
       type = c("p","r"),col.line = "red",
       abline=(h = 0))

(Plot: logged variance of 'n' lag differences vs logged lags, with fitted regression line.)

5. The Hurst exponent is the log(lags) coefficient estimate divided by 2 (the slope / 2): because the variance of the lagged differences scales as the lag raised to 2H, the fitted slope estimates 2H.


For the date range 2000-01-06 to 2017-08-06 at our chosen lags of 2:20 days, the SPY Hurst exponent is 0.443483, which is mean reverting.

Another method is to compute a simple Hurst exponent over a rolling 'n' day look back period.

The calculation for simple Hurst:

# Function For Simple Hurst Exponent
x <- new.df$Close # set x variable

simpleHurst <- function(y){
  sd.y <- sd(y)                   # standard deviation of the series
  m <- mean(y)
  y <- y - m                      # demean the series
  max.y <- max(cumsum(y))         # high water mark of cumulative deviations
  min.y <- min(cumsum(y))         # low water mark of cumulative deviations
  RS <- (max.y - min.y)/sd.y      # rescaled range R/S
  H <- log(RS) / log(length(y))   # simple Hurst estimate: log(R/S) / log(n)
  return(H)
}
simpleHurst(x) # Obtain Hurst exponent for entire series

What we can do is apply the simple Hurst function over an 'n' day rolling look back period using rollapply in R; we do this with our created getHURST function:

# Hurst Exponent
# Andrew Bannerman
# 8.11.2017

require(lubridate)
require(dplyr)
require(magrittr)
require(zoo)
require(ggplot2)

# Data path
data.dir <- "G:/R Projects"                                #Enter your directry here of you S&p500 data.. you need / between folder names not \
data.read.spx <- paste(data.dir,"SPY.csv",sep="/")

# Read data to read.spx data frame
read.spx <- read.csv(data.read.spx,header=TRUE, sep=",",skip=0,stringsAsFactors=FALSE)

# Make dataframe
new.df <- data.frame(read.spx)

# Convert Values To Numeric
cols <-c(3:8)
new.df[,cols] %<>% lapply(function(x) as.numeric(as.character(x)))

#Convert Date Column [1]
new.df$Date <- ymd(new.df$Date)

# Use for subsetting by date
new.df <- subset(new.df, Date >= "2000-01-06" & Date <= "2017-08-06")    # Change date ranges
#new.df <- subset(new.df, Date >= as.Date("1980-01-01"))                                     # Choose start date to present

# Function For Simple Hurst Exponent
x <- new.df$Close # set x variable

simpleHurst <- function(y){
  sd.y <- sd(y)
  m <- mean(y)
  y <- y - m
  max.y <- max(cumsum(y))
  min.y <- min(cumsum(y))
  RS <- (max.y - min.y)/sd.y
  H <- log(RS) / log(length(y))
  return(H)
}
simpleHurst(x) #Obtain Hurst exponent for entire series

# Calculate rolling Hurst exponent for different 'n' periods
getHURST <- function(rolldays) {
  function(new.df) {
    rollapply(new.df$Close,
              width = rolldays,               # width of rolling window
              FUN = simpleHurst,
              fill = NA,
              align = "right")
  }
}
# Create a matrix to put the roll hurst in
roll.hurst.matrix <- matrix(nrow=nrow(new.df), ncol=0)

# Loop for filling it
for (i in 2:252) {
  roll.hurst.matrix <- cbind(roll.hurst.matrix, getHURST(i)(new.df))
}

# Rename columns
colnames(roll.hurst.matrix) <- sapply(2:252, function(n)paste("roll.hurst.n", n, sep=""))

# Bind to existing dataframe
new.df <-  cbind(new.df, roll.hurst.matrix)

# Line Plot of rolling hurst
ggplot(data=new.df, aes(x = Date)) +
  geom_line(aes(y = roll.hurst.n5), colour = "black") +
  labs(title="Hurst Exponent - Rolling 5 Days") +
  labs(x="Date", y="Hurst Exponent")

# Plot Roll Hurst Histogram
qplot(new.df$roll.hurst.n5,
      geom="histogram",
      binwidth = 0.005,
      main = "Simple Hurst Exponent - Rolling 5 Days",
      fill=I("grey"),
      col=I("black"),
      xlab = "Hurst Exponent")

# Plot S&P500 Close
ggplot(data=new.df, aes(x = Date)) +
  geom_line(aes(y = Close), colour = "darkblue") +
  ylab(label="S&P500 Close") +
  xlab("Date") +
  labs(title="S&P500") 

# Write output to file
write.csv(new.df,file="G:/R Projects/hurst.roll.csv")

This calculates the simple Hurst exponent over an 'n' day look back period. As we can see from the plotted histograms, shorter time frames for the SPY show Hurst exponents below 0.50, and as we extend to longer time frames the SPY fits the trending category, with Hurst exponents closer to 1.

(Histograms and plots: rolling simple Hurst exponent for 3 day, 5 day, 10 day, 6 month and 1 year windows.)

Using this information we may design (fit) a model which captures the nature of the series under examination. In this case it would make sense to build mean reversion models on short time frames for the SPY and to develop trending or momentum models for the longer time frames.

References
Chan, Ernie (2013). Algorithmic Trading: Winning Strategies and Their Rationale. Wiley.