Estimating the Hurst Exponent Using R/S Range

In this post we will estimate the Hurst exponent using the R/S (rescaled range) method. The Hurst exponent measures the long range memory of a time series (more below). If a series has no memory, i.e. if each point in time is independent from previous points in time, then it is said to be a random process. Examples of random processes are Markov processes, Brownian motion and white noise.

A series which trends has autocorrelation where one point in time is correlated with a lagged version of itself. The autocorrelation of a series may be calculated with the following formula:

$$\rho_k \;=\; \frac{\sum_{i=1}^{N-k}\left(Y_i - \bar{Y}\right)\left(Y_{i+k} - \bar{Y}\right)}{\sum_{i=1}^{N}\left(Y_i - \bar{Y}\right)^2}$$

In English, the top part of the equation (numerator) is the covariance between the series Yi and a lagged version of itself Yi+k. The bottom part of the equation (denominator) is the variance of the series.

So we simply calculate the covariance of the series and a lagged version of itself and divide by the variance.
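As a quick sanity check, here is a minimal R sketch (not from the original post) that computes the lag-k autocorrelation by hand, covariance of the series with a lagged copy divided by the variance, and compares it against the built-in stats::acf. The series y is simulated purely for illustration.

# Lag-k autocorrelation by hand: cov(Y_i, Y_{i+k}) / var(Y)
set.seed(42)
y <- cumsum(rnorm(500))          # a persistent (trending) toy series
k <- 1                           # lag

y_head <- y[1:(length(y) - k)]   # Y_i
y_tail <- y[(1 + k):length(y)]   # Y_{i+k}

manual_acf <- sum((y_head - mean(y)) * (y_tail - mean(y))) /
              sum((y - mean(y))^2)

# Compare with the built-in estimator (lag 0 sits at index 1 of $acf)
builtin_acf <- acf(y, lag.max = k, plot = FALSE)$acf[k + 1]
c(manual = manual_acf, builtin = builtin_acf)

Both numbers should agree, which confirms the formula above is all that acf is doing.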

We may plot the autocorrelation of the SP500 futures contract close prices.
source: https://www.quandl.com/data/CHRIS/CME_SP1-S-P-500-Futures-Continuous-Contract-1-SP1-Front-Month

I cannot validate the accuracy of this data, but it is good enough for illustration purposes.

auto_cor.png

This is far from random: close prices have a high degree of correlation with lagged versions of themselves.

Now we may estimate the Hurst exponent over an S&P futures return series using the rescaled range analysis method (R/S). This is the original method devised by Harold Edwin Hurst, who developed the formula while studying the long term flow behavior of the Nile river.

We shall estimate H for a 1 bar return series and a 100 bar return series.

The R/S calculation steps are depicted on Wikipedia.
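The key relationship behind the method is that, for a series with long range memory, the expected rescaled range grows as a power law in the block size n:

$$E\!\left[\frac{R(n)}{S(n)}\right] = C\, n^{H} \quad \text{as } n \to \infty$$

so taking logs turns the exponent H into the slope of log(R/S) versus log(n), which is exactly what the regression further below recovers.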

The Hurst exponent is more of an estimate than an absolute calculation. Corresponding H values signal:

H values less than 0.5 = anti-persistent behavior, mean reverting
H values of 0.5 = random process
H values greater than 0.5 = persistent behavior, trending

The procedure below calculates the R/S range and estimates the Hurst exponent by regressing log(R/S) against log(n-lag):

1. Calculate a return series. We will estimate the Hurst exponent on a 1 bar return series. The thesis here is that it will be close to a random process if each bar is independent from the last (no autocorrelation), i.e. H = 0.5. If we pick a longer return series such as 100 bars we would expect to see higher H values and higher autocorrelation coefficients.
2. Choose the block sizes. The Hurst exponent is estimated by regression of a power law, log(R/S) vs log(n); the slope is the Hurst exponent. For that reason logarithmically spaced block sizes were chosen: [8,16,32,64,128,256,512,1024].
3. Calculate the mean for each block size. In this case 1:8, 2:9 etc., or i-n+1:i. Do this for each block size.
4. For said block size, subtract the mean from each value in the return series.
5. Sum the deviations from the mean.
6. Find the maximum and minimum of the summed deviations from the mean.
7. Find R, the range of summed deviations from the mean, by subtracting maximum – minimum.
8. Calculate the standard deviation of the deviations from the mean.
9. Calculate R/S, the rescaled range, by dividing R by the standard deviation of the deviations from the mean.
10. After rolling through the series calculating the R/S value for each block size, take the mean R/S value for each block size. So for each lag (block size) we have one mean value.
11. Perform a regression of log2(mean_RS) vs log2(n_lag). The slope is the Hurst exponent. The procedure above is detailed below using the Julia language:


####################################################################################
# Rolling Hurst
####################################################################################

using Statistics  # mean, var, cor

# initialize outputs (empty 0-column matrices to hcat results into)
log_n_out = Array{Float64}(undef, length(d_es_close), 0)
out_RS = Array{Float64}(undef, length(d_es_close), 0)

# Set lags (range or specific)
min_lag = 100
max_lag = 200
lags = min_lag:max_lag   # example range of block sizes
# or specific (logarithmically spaced) lags # comment out whichever is not needed
lags = [8,16,32,64,128,256,512,1024]
# Specify return series lag
n_lag = 2000

i=1
j=1
c=1
    for j = lags
        n = lags[c]
    # Calculate returns of the series
    #n_lag = lags[c] # optionally set n_lag per block size
    rets = zeros(length(d_es_close))
    for i in n_lag+1:length(d_es_close)
        rets[i] = ((d_es_close[i] / d_es_close[i-n_lag])-1) # rets[i] = ((d_es_close[i] / d_es_close[i-n_lag+1])-1)
    end
    #rets = d_es_close
    # Rolling mean over the lookback (block size n)
    mean_out = zeros(length(rets))
    for i = n:size(rets,1)
                mean_out[i] = mean(rets[i-n+1:i])
            end
    # Deviations from the rolling mean
    dev_mean_out = zeros(length(mean_out))
    for i = n:size(mean_out,1)
                dev_mean_out[i] = (rets[i] - mean_out[i])
            end
    # Roll sum the deviations from the mean
    sum_out = zeros(length(dev_mean_out))
    for i = n:size(dev_mean_out,1)
            sum_out[i] = sum(dev_mean_out[i-n+1:i])
        end
    # Find the maximum and minimum of sum of the mean deviations
    max_out = zeros(length(sum_out))
    for i = n:size(sum_out,1)
                max_out[i] = maximum(sum_out[i-n+1:i])
            end
    min_out = zeros(length(sum_out))
    for i = n:size(sum_out,1)
                min_out[i] = minimum(sum_out[i-n+1:i])
            end
    # R = Range, max - min
    R_out = zeros(length(dev_mean_out))
    for i= n:size(dev_mean_out,1)
        R_out[i] = max_out[i] - min_out[i]
    end
    # Rolling standard deviation of the deviations from the mean
    stdev_out = zeros(length(rets))
    for i = n:size(rets,1)
            stdev_out[i] = sqrt(var(dev_mean_out[i-n+1:i]))
        end
    # Calculate rescaled range (Range R_out / stdev of the deviations from the mean)
    RS_out = zeros(length(R_out))
    for i = n:size(R_out,1)
            RS_out[i] = R_out[i] / stdev_out[i]
        end
    # Calculate log_n (n)
    index = fill(n,length(rets))
    log_n = zeros(length(rets))
    for i =n:size(index,1)
        log_n[i] = log2(index[i])
    end

# out (accumulate results for this block size; `global` needed at the top level)
global log_n_out = hcat(log_n_out, log_n)
#log_rs_out = hcat(log_rs_out, log_RS)
global out_RS = hcat(out_RS, RS_out) # re-scaled range

global c = c + 1
end

# access dims of the matrix
# row ,col
#log_rs_out[20,:]

# Calculate expected value for R/S over various n
# Take the rolling mean over each varying n lag at n width
expected_RS = zeros(size(out_RS,1), size(out_RS,2))
n=100
for j = 1:size(out_RS,2)  # loop on column dimension
    for i = n:size(out_RS,1) # loop on row dimension
            expected_RS[i,j] =  mean(out_RS[i-n+1:i,j])
            end
        end

# log 2
expected_log_RS = zeros(size(out_RS,1), size(out_RS,2))
for j = 1:size(expected_log_RS,2)  # loop on column dimension
    for i = n:size(expected_log_RS,1) # loop on row dimension
            expected_log_RS[i,j] =  log2(expected_RS[i,j])
            end
        end

    # Regression slope of log(n) and expected value of log(R/S)
    # x = log(n), y = log(R/S)
    b_slope = zeros(size(out_RS,1))
    A_intercept = zeros(size(out_RS,1))

    for i = n:size(out_RS,1)
        xi = log_n_out[i,:]  # grab the row for varying lags of log_n
        yi = expected_log_RS[i,:] # grab the row for varying lags of r/s value
        # Mean of X (Mx)
        Mx = mean(xi)
        # Mean of Y (My)
        My = mean(yi)
        # Standard deviation of X (sX)
        sX = sqrt(var(xi))
        # Standard Deviation of Y (sY)
        sY = sqrt(var(yi))
        # Correlation of x and y  (r)
        r = cor(xi,yi)
        # slope (b) = r * (sY/sX)
        b = r * (sY/sX)
    # find intercept A = MY - bMX
    A = My - (b*Mx)

# out
b_slope[i] = b
A_intercept[i] = A
end
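For readers who prefer R, a compact, non-rolling sketch of the same R/S estimate might look like the block below; it is an illustration rather than the code used for the plots, and it assumes rets is a plain numeric return vector. It averages R/S over non-overlapping blocks instead of rolling windows, so its numbers will differ slightly from the Julia output above.

# Minimal (non-rolling) R/S Hurst estimate - a sketch; `rets` is assumed to be
# a numeric return vector (e.g. 1 bar returns of the S&P futures closes)
rs_hurst <- function(rets, block_sizes = c(8, 16, 32, 64, 128, 256, 512, 1024)) {
  mean_rs <- sapply(block_sizes, function(n) {
    n_blocks <- floor(length(rets) / n)
    rs_vals <- sapply(seq_len(n_blocks), function(b) {
      block <- rets[((b - 1) * n + 1):(b * n)]
      dev   <- block - mean(block)   # deviations from the block mean
      z     <- cumsum(dev)           # cumulative deviations
      R     <- max(z) - min(z)       # range of the cumulative deviations
      S     <- sd(block)             # standard deviation of the block
      if (S == 0) NA else R / S
    })
    mean(rs_vals, na.rm = TRUE)      # expected R/S for this block size
  })
  fit <- lm(log2(mean_rs) ~ log2(block_sizes))
  unname(coef(fit)[2])               # slope = Hurst exponent
}

# Example: white noise should give an estimate in the neighbourhood of 0.5
set.seed(1)
rs_hurst(rnorm(5000))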

Using the Julia code above we may plot the Hurst exponent for a 1 bar return series.

Hurst (2)

If we have a random process, successive points in time are independent from each other. We see Hurst values fluctuating around the 0.50 area, with H near 0.75 in 2009 during a strong trending period.

We may see the effect of analyzing a longer return stream, let's say a 100 bar return series:

H_close.png

We see Hurst values closer to 1 for a 100 bar return series.

Furthermore, to see how the Hurst exponent and autocorrelation are related, we may calculate a rolling autocorrelation:

# Rolling Autocorrelation
# Sliding window (k width)
mean_out = zeros(size(d_es_close,1))
var_out = zeros(size(d_es_close,1))
cov_out = zeros(size(d_es_close,1))
autocor_out = zeros(size(d_es_close,1))
devmean_out = zeros(size(d_es_close,1))
n=1000
k = 1 # set lag
n_lag = 100 # return series lag

# Calculate n_lag bar return series
rets = zeros(size(d_es_close,1))
for i in n_lag+1:length(d_es_close)
    rets[i] = ((d_es_close[i] / d_es_close[i-n_lag])-1) # rets[i] = ((d_es_close[i] / d_es_close[i-n_lag+1])-1)
end

# lag the series by k
lagged_series = [fill(0,k); rets[1:end-k]]

# Calculate the mean of the rolling  sample
for i = n:size(rets,1)
    mean_out[i] = mean(rets[i-n+1:i])
end

# Calculate deviations from the mean
for i = n:size(rets,1)
devmean_out[i] = rets[i] - mean_out[i]
end

# Calculate rolling variance
for i = n:size(rets,1)
    var_out[i] = var(rets[i-n+1:i])
end

# Calculate rolling covariance
for i = n:size(rets,1)
    if i+k >= size(rets,1)
        break
    end
    cov_out[i] = cov(rets[i-n+1:i],lagged_series[i-n+1:i])
end

# Rolling Autocorrelation
for i =n:size(rets,1)
    autocor_out[i] = cov_out[i] / var_out[i]
end

For a 1 bar return series at lag k=1:

auto_cor_1_bar

We see very little correlation from one point in time to the next. The respective Hurst exponent is in the 0.40 to 0.55 range.

And the autocorrelation for a 100 bar return series at lag k=1:

auto_cor_100_bar.png

We see strong correlation for the 100 bar return series, with correlation coefficients greater than 0.96 and respective Hurst exponents closer to 1.0.

So what's the point of all this?

As depicted above, information pertaining to the (non) randomness of a series may be of some benefit when designing models to fit to the data. The autocorrelation / Hurst exponent may be used to complement existing strategies by signalling the type of regime the market is currently in, so we can lean on strategies which better suit current conditions.
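As a rough illustration of that idea (not part of the original code), a rolling Hurst series could be used to gate entries; the thresholds and variable names below are illustrative only.

# Hypothetical regime filter: only allow mean-reversion entries when the
# rolling Hurst estimate says the market is anti-persistent.
classify_regime <- function(h, band = 0.02) {
  ifelse(h < 0.5 - band, "mean_reversion",
         ifelse(h > 0.5 + band, "momentum", "random_walk"))
}

# hurst     : rolling Hurst estimates, one per bar
# mr_signal : raw mean-reversion entries (1 = long, 0 = flat)
gate_signal <- function(mr_signal, hurst) {
  regime <- classify_regime(hurst)
  ifelse(regime == "mean_reversion", mr_signal, 0)
}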

As a note of interest, on a 1 bar daily return series for the SPY ETF at lags 2:20:

H_two_twenty

Values of H are below 0.50 and in mean reversion territory. Let's shorten the lags once more, to 2:10.

H_two_ten

We see significant mean reversion H values. At holding periods sub 20 bars, a mean reversion model would be best suited.

A way I like to think of this is: how quickly does something diffuse from the mean? A mean reverting series is almost stuck in the mud, with prices centered around a mean. I like to think of it as a compressed coil; in terms of the fractal dimension, which pertains to the roughness of a surface, mean reversion is most certainly rough. On the other side, a strong trend has a faster rate of diffusion from the mean, or in the fractal dimension a smooth surface. The S&P500 is a blend of this action: on a shorter holding period there is a mean reversion tendency (rough), but on a longer time horizon the market displays trending behavior (smooth).

In closing, and to recap: we see how the Hurst exponent and the autocorrelation of a series are related to each other. Random processes show no dependence from one point in time to a successive point in time, i.e. no correlation, with a corresponding Hurst value of 0.5. On the other side of this, a series showing persistent behavior, a trending series, displayed high correlation coefficients where successive points in time were correlated with each other, with corresponding H values closer to 1.

Code may be found on my github.

XIV | VXX – Testing Mean Reversion, Momentum, Random Walk – Rolling Hurst Exponent

In addition to the previous posts studying volatility strategies here and here, we aim to study the nature of the XIV and VXX series. We subject the series to the Hurst exponent, following the same procedure as Ernie Chan (2013): we take the lagged price differences and regress the log variance of the lagged differences against the log time lags; an example of this here.

For this post we will download VIX futures data from the CBOE website and join synthetic XIV and VXX data to make the largest data set we possibly can, per this previous post. After we have our XIV and VXX data we then proceed to compute the Hurst exponent of each series; the procedure for this is as follows:

1. Compute lagged differences for varying time lags. For example, lag 2 = today's close price – the close price 2 days ago; lag 3 = today's close price – the close price 3 days ago.
2. Next, compute the variance of the lagged differences. Ernie Chan recommends at least 100 days of data for this, so we compute the variance over a period of 100 days for each lagged difference.
3. Perform a linear regression of log(variance_lagged_differences) against log(time_lags) and divide the slope by 2 to obtain the Hurst exponent (a compact static sketch of these steps follows below).
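Before the rolling version, a minimal static sketch of the three steps above might look like this; hurst_var is a hypothetical helper name, price is assumed to be a numeric close-price vector, and the variance here is taken over the whole sample rather than a 100 day window.

# Static Hurst estimate from the variance of lagged differences:
# regress log(variance of lagged differences) on log(lag); slope / 2 = H.
hurst_var <- function(price, lags = 2:30) {
  variances <- sapply(lags, function(k) var(diff(price, lag = k), na.rm = TRUE))
  fit <- lm(log(variances) ~ log(lags))
  unname(coef(fit)[2] / 2)
}

# Example on a simulated random walk - H should come out near 0.5
set.seed(7)
hurst_var(cumsum(rnorm(2000)))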

We compute the procedure above on a rolling basis; the look back chosen for the variance is 100 trading days. We also use the R package RcppEigen and its fastLm function to perform a rolling linear regression. The code that achieves this:

###############################
# Hurst Exponent (varying lags)
###############################

require(magrittr)
require(zoo)
require(TTR)      # runVar
require(ggplot2)  # plotting
require(scales)   # date_breaks / date_format
require(lattice)

## Set lags
lags <- 2:(252*6)

# Function for finding differences in lags. Todays Close - 'n' lag period
getLAG.DIFF <- function(lagdays) {
  function(term.structure.df) {
    c(rep(NA, lagdays), diff(term.structure.df$vxx_close, lag = lagdays, differences = 1, arithmetic = TRUE, na.pad = TRUE))
  }
}
# Create a matrix to put the lagged differences in
lag.diff.matrix <- matrix(nrow=nrow(term.structure.df), ncol=0)

# Loop for filling it
for (i in lags) {
  lag.diff.matrix <- cbind(lag.diff.matrix, getLAG.DIFF(i)(term.structure.df))
}

# Rename columns
colnames(lag.diff.matrix) <- sapply(lags, function(n)paste("lagged.diff.n", n, sep=""))

# Bind to existing dataframe
term.structure.df <-  cbind(term.structure.df, lag.diff.matrix)
head(term.structure.df,25)

############################################################
# Calculate rolling variances of 'n period' differences
# Set variance look back to 100 days
############################################################
# Convert NA to 0
term.structure.df[is.na(term.structure.df)] <- 0

get.VAR <- function(varlag) {
  function(term.structure.df) {
    runVar(term.structure.df[, paste0("lagged.diff.n", lags[varlag])], y = NULL, n = 100, sample = TRUE, cumulative = FALSE)
  }
}
# Create a matrix to put the variances in
lag.var.matrix <- matrix(nrow=nrow(term.structure.df), ncol=0)

# Loop for filling it
for (i in 1:length(lags)) {
  lag.var.matrix <- cbind(lag.var.matrix, get.VAR(i)(term.structure.df))
}

# Rename columns
colnames(lag.var.matrix) <- sapply(lags, function(n)paste("roll.var.diff.", n, sep=""))

# Bind to existing dataframe
term.structure.df <-  cbind(term.structure.df, lag.var.matrix)

########################################
# Subset to remove all leading NA
########################################
#NonNAindex <- which(!is.na(term.structure.df))
#set_lag_threshold <- 50 # Set Variance
#na <- which(!is.na(term.structure.df[,paste0("roll.var.diff.", set_lag_threshold)]))
#firstNonNA <- min(na)
#term.structure.df<-term.structure.df[firstNonNA:nrow(term.structure.df),]

########################################
# Rolling linear regression to compute hurst exponent
########################################
variance <- list()
lag.vec <- c(2:30)  # Select short term lags
# Pull the short-term lag variance columns, one row (date) at a time
var.cols <- paste0("roll.var.diff.", lag.vec)
for (i in 1:nrow(term.structure.df)) {
  variance[[i]] <- unname(unlist(term.structure.df[i, var.cols]))
}

#Initialize list, pre allocate memory
results<-vector("list", length(variance))
hurst<-vector("list", length(variance))
library(RcppEigen)
i=1
for(i in 1:length(variance)){
  results[[i]] <- fastLm(log(variance[[i]]) ~ log(lag.vec))  # regress log variance on log lags
  hurst[[i]]<- coef(results[[i]])[2]/2
  ptm0 <- proc.time()
  Sys.sleep(0.1)
  ptm1=proc.time() - ptm0
  time=as.numeric(ptm1[3])
  cat('\n','Iteration',i,'took', time, "seconds to complete")
}

# Join results to data frame
hurst <- do.call(rbind, hurst)
hurst.df <- as.data.frame(hurst)
hurst.df <- data.frame(hurst.df,Date=term.structure.df$Date)
colnames(hurst.df)[1] <- "Hurst"
hurst.df <- subset(hurst.df, Date >= as.POSIXct("2008-04-28") ) # subset data remove leading NA VXX only
# Plot Data
ggplot() +
  geom_line(data=hurst.df ,aes(x=Date,y=Hurst), colour="black") +
  theme_classic()+
  scale_y_continuous(breaks = round(seq(min(hurst.df$Hurst), max(hurst.df$Hurst), by = 0.2),2))+
  scale_x_date(breaks = date_breaks("years"), labels = date_format("%Y"))+
  ggtitle("VXX Hurst Exponent - Daily Bars - Lags 2:30", subtitle = "Regression log(variances) ~ log(time_lags) - Hurst = Coef/2") +
  labs(x="Date",y="Hurst")+
  theme(plot.title = element_text(hjust=0.5),plot.subtitle =element_text(hjust=0.5))+
  #geom_hline(yintercept = 0.5, color = "red", size=0.5,linetype="dashed")+
  geom_rect(aes(xmin=as.Date(head(hurst.df$Date,1)),xmax=as.Date(Inf),ymin=0.5,ymax=Inf),alpha=0.1,fill="green")+
  geom_rect(aes(xmin=as.Date(head(hurst.df$Date,1)),xmax=as.Date(Inf),ymin=-Inf,ymax=0.5),alpha=0.1,fill="orange")+
  geom_rect(aes(xmin=as.Date(head(hurst.df$Date,1)),xmax=as.Date(Inf),ymin=0.48,ymax=0.52),alpha=.7,fill="red")

The output:

Rplot140
Random Walk Band = Red, between 0.48 and 0.52. Momentum = Green, > 0.52. Mean Reversion = Orange, < 0.48.
Rplot142
Random Walk Band = Red, between 0.48 and 0.52. Momentum = Green, > 0.52. Mean Reversion = Orange, < 0.48.

This is for lagged differences of 2 to 30 days with a rolling variance of 100 days. We chose the smaller lagged differences as the strategies going forward will likely hold no longer than 30 days, so it makes sense to see the nature of the series over this time period.

We can compute how often XIV and VXX are in a momentum, mean reversion or random walk regime. The code for this:

# Count how often in each regime
momo <- sum(hurst.df$Hurst > 0.52, na.rm=TRUE)
mean.rev <- sum(hurst.df$Hurst < 0.48 , na.rm=TRUE)
random <- sum(hurst.df$Hurst >= 0.48 & hurst.df$Hurst <=.52, na.rm=TRUE)
exact.random <- sum(hurst.df$Hurst >= 0.50 & hurst.df$Hurst <.51, na.rm=TRUE)
total.rows <- NROW(hurst.df)

# Percentage of time in momentum, mean reversion, random walk
momo.perc <- momo / total.rows
mean.rev.perc <- mean.rev / total.rows
random.perc <- random / total.rows
exact.random.perc <- exact.random / total.rows

VXX:

vxx.percs.df <- data.frame ("Momentum, Over 0.50" = momo.perc,"Mean Reversion, Less than 0.5" = mean.rev.perc, "Random Walk Band, 0.48 to 0.52" = random.perc, "Exact Random Walk, 0.50" = exact.random.perc)

> vxx.percs.df
  Momentum..Over.0.50 Mean.Reversion..Less.than.0.5 Random.Walk.Band..0.48.to.0.52 Exact.Random.Walk..0.50
1           0.7471264                     0.1395731                      0.1133005              0.02791461

XIV:

xiv.percs.df <- data.frame ("Momentum, Over 0.50" = momo.perc,"Mean Reversion, Less than 0.5" = mean.rev.perc, "Random Walk Band, 0.48 to 0.52" = random.perc, "Exact Random Walk, 0.50" = exact.random.perc)

> xiv.percs.df
  Momentum..Over.0.50 Mean.Reversion..Less.than.0.5 Random.Walk.Band..0.48.to.0.52 Exact.Random.Walk..0.50
1           0.7081281                     0.2085386                     0.08333333                0.022578

What we see is VXX is in a momentum phase 74% of the time and XIV is in a momentum phase 70% of the time. That is the dominating theme, with mean reversion 13% (VXX) and 20% (XIV) of the time, and random walk 11% (VXX) and 8% (XIV) of the time.

If fitting a model to the series itself, without using the volatility risk premium / roll yield as entry signals, one may try fitting models based on the theme of momentum.

In subsequent posts we will be applying models based on extracting profits when the market is in contango/backwardation as well as applying models to the series itself. We will also look at the autocorrelation of the series, and this will serve as a primer for testing a strategy for robustness.

Thanks for reading!

S&P500 – General Intraday Study

I have arbitrarily chosen 1 minute and 30 second bars to study the range (high – low) and the mean close to close change per time of day.

I follow this procedure:

1. Calculate High – Low to find the bar range
2. Calculate the Close to Close differences for each bar
3. Group all by time of day
4. Mean for each time of day for the range and close to close differences
5. Plot (a minimal sketch of the grouping is shown after this list)
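A minimal dplyr sketch of steps 1-4, assuming a data frame bars of intraday bars with columns Time ("HH:MM:SS"), High, Low and Close (the column names are illustrative, not the original data set), might look like:

# Group intraday bars by time of day and average the range and the
# close-to-close change for each time bucket.
require(dplyr)

intraday_stats <- bars %>%
  mutate(range     = High - Low,                      # 1. bar range
         cc_change = Close - lag(Close)) %>%          # 2. close-to-close change
  group_by(Time) %>%                                  # 3. group by time of day
  summarise(mean_range = mean(range, na.rm = TRUE),   # 4. mean per time of day
            mean_cc    = mean(cc_change, na.rm = TRUE))

# 5. Plot, e.g. plot(intraday_stats$mean_range, type = "h")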

If we look at 1 minute range bars first:

Rplot150
Mean High-Low Group By Time 1 Minute Bars – 1998  To present (10.13.2017). Due to image size. Right click and download and zoom around to see the image

What is prominent, if we classify volatility as range extension, is that range expands in the morning, contracts mid day, and increases again post 1300.

We also see a notable uptick in range at the first and last bars (0830 / 1500 Central Time) and also at the 0900 and 1300 bars. I believe the 9am uptick may invite more buyers who waited for the first 30 minutes to pass to see in which direction traders place their bets.

Post 1300 hours we see the range increase to market closing.

The highest range bars are the first and last bars of the session and the bars in the morning, specifically the 9am bar.

We also see, prominently, an uptick in the range at every 30 minute interval, on the hour and half past the hour. This is true at every 10 minute interval also.

If we take the mean close to close change per time of day:

Rplot156.png
Mean Close to Close Difference 1 Min Bars – April 2017 To Present

We see the first 4 minutes of the open tend to have a positive mean bias and the closing 15 minutes tend to show a slightly negative bias.

If we dig into the 30 second bars looking first at the range:

Rplot153
Mean High-Low Group By Time 30 Second Bars – April 2017 to Present

We see the same theme. First and last bars have the highest range. Again we see range decline into the 9am before an uptick of range. Range declines into the lunch hours and at 1300 we see an uptick in range before steadily increasing to the market close. We also see the increase of range on every hour and half hour intervals (top and bottom of the hour).

If we view the mean close to close difference by time:

Rplot154
Mean Close to Close Difference 30 Second Bars – April 2017 To Present

We see that the first 30 second bar has a notable positive bias. The other bars do not offer much statistical significance.

Rplot157
Mean High-Low Group By Time 15 Second Bars – April 2017 to Present

This is the range of the 15 second bar. The theme is clear. From 830 to 1030 we see the highest range. Lunch time range contracts before again expanding post 1300 to market close.

Free WordPress doesn't allow the images to be viewed clearly and right click save as yields blurred results. If you have an interest in the images above shoot me an email from the contact form and I will email them.

 

S&P500 Seasonal Study + Other Commodities

We study to see if there is seasonality to the S&P500. We perform the procedure below on data from 1928 to present day (10.11.2017)

1. Calculate daily spread of closing prices
2. Group daily spread by month
3. Calculate mean of each month

We simply compute the spread of the close to close values. We do not use the % returns here, simply close – close for every day in the series.

Next we group all days by their month. We then compute the mean for each grouped month.

The results for the S&P500 are below:

Rplot126

Rplot127

The old adage… ‘Sell In May And Go Away!’ seems to be true.

Other ETF’s:

Rplot134

DIA follows mostly the same seasonal pattern to the S&P500.

Rplot130

The best months for Crude Oil seem to be from Feb through June.

Rplot135

Natural Gas has its worst months in July and August.

Rplot131

Best months for Gold look to be Jan/Feb and August.

Rplot132

 

Silver follows a similar seasonal pattern to Gold.

Commodities tend to exhibit seasonal supply and demand fluctuations which are consistently shown in the mean plots above and, with a bit of googling, may be explained.

In another post we will test for seasonal strategies which will attempt to exploit the above seasonal trends.

Full R Code below:

# S&P500 Seasonal Study 
# Calculate daily price spreads
# Group by month 
# Average each monthly group 

require(lubridate)
require(dplyr)
require(magrittr)
require(TTR)
require(zoo)
require(data.table)
require(xts)
require(ggplot2)
require(ggthemes)

# Data path
data.dir <- "C:/Stock Market Analysis/Market Data/MASTER_DATA_DUMP"
data.read.spx <- paste(data.dir,"$SPX.csv",sep="/")

# Read data
read.spx <- read.csv(data.read.spx,header=TRUE, sep=",",skip=0,stringsAsFactors=FALSE)

# Convert Values To Numeric 
cols <-c(3:8)
read.spx[,cols] %<>% lapply(function(x) as.numeric(as.character(x)))

# Convert Date Column [1] to Date format 
read.spx$Date <- ymd(read.spx$Date)

# Subset Date
#read.spx <- subset(read.spx, Date >= as.Date("1960-01-01") ) 

# Compute daily price differences 
# We replicate NA 1 time in order to maintain correct positioning of differences
# Within the data frame
read.spx$close.diff <- c(rep(NA, 1), diff(read.spx$Close, lag = 1, differences = 1, arithmetic = TRUE, na.pad = TRUE))

# Group each daily difference by month
group <- read.spx %>% dplyr::mutate(mymonth = lubridate::month(Date)) %>% group_by(mymonth) 
read.spx <- data.frame(read.spx,group$mymonth)
read.spx <- arrange(read.spx,group.mymonth)

# Duplicate df
for.mean <- data.frame(read.spx)

# Perform mean
mean <- for.mean %<>%
  group_by(group.mymonth) %>%
  summarise(mean=mean(close.diff,na.rm = TRUE))

# Confidence
jan <- subset(read.spx, group.mymonth  == 1)
feb <- subset(read.spx, group.mymonth  == 2)
mar <- subset(read.spx, group.mymonth  == 3)
apr <- subset(read.spx, group.mymonth  == 4)
may <- subset(read.spx, group.mymonth  == 5)
jun <- subset(read.spx, group.mymonth  == 6)
jul <- subset(read.spx, group.mymonth  == 7)
aug <- subset(read.spx, group.mymonth  == 8)
sep <- subset(read.spx, group.mymonth  == 9)
oct <- subset(read.spx, group.mymonth  == 10)
nov <- subset(read.spx, group.mymonth  == 11)
dec <- subset(read.spx, group.mymonth  == 12)
jan.t.test <- t.test(jan$close.diff, conf.level = 0.95,na.rm = TRUE)
jan.t.test$estimate

# Jan Plot 
hist(jan$close.diff,main="Jan Mean - Normal Distribution",xlab="Mean")

# Plot 
ggplot(mean, aes(group.mymonth, mean)) +
  geom_col()+
  theme_classic()+
  scale_x_continuous(breaks = seq(0, 12, by = 1))+
  ggtitle("UNG - Mean Daily Spead Per Month", subtitle = "2007 To Present") +
  labs(x="Month",y="Mean Daily Spread Per Month")+
  theme(plot.title = element_text(hjust=0.5),plot.subtitle =element_text(hjust=0.5))

ggplot(mean, aes(group.mymonth, mean)) +
  geom_line()+
  theme_bw() +
  scale_x_continuous(breaks = seq(0, 12, by = 1))+
  scale_y_continuous(breaks = seq(-0.15, 0.30, by = 0.02))+
  ggtitle("Mean Daily Spead Per Month", subtitle = "1928 To Present") +
  labs(x="Month",y="Mean Daily Spread Per Month")+
  theme(plot.title = element_text(hjust=0.5),plot.subtitle =element_text(hjust=0.5))+
  geom_rect(aes(xmin=4.5,xmax=9,ymin=-Inf,ymax=Inf),alpha=0.1,fill="#CC6666")+
  geom_rect(aes(xmin=1,xmax=4.5,ymin=-Inf,ymax=Inf),alpha=0.1,fill="#66CC99")+
  geom_rect(aes(xmin=9,xmax=12,ymin=-Inf,ymax=Inf),alpha=0.1,fill="#66CC99")

# Write output to file
write.csv(read.spx,file="C:/R Projects/seasonal.csv")


Stock GVP – Mean Reverting Series

Let us explore the ticker symbol GVP. We will test for mean reversion with the Hurst exponent and calculate the half life of mean reversion.

First, lets plot the daily closing prices:

library(ggplot2)
ggplot(new.df, aes(x = Date, y = Close))+
geom_line()+
labs(title = "GVP Close Prices", subtitle = "19950727 to 20170608")+
theme(plot.title = element_text(hjust=0.5),plot.subtitle = element_text(hjust=0.5,size=9), plot.caption = element_text(size=7))

Rplot13

Let's run the Hurst exponent to test for mean reversion; we will do this over the entire history of GVP. For this test we will use a short term lag period of 2:20 days (Explanation Here).

# Hurst Exponent
# Andrew Bannerman
# 8.11.2017

require(lubridate)
require(dplyr)
require(magrittr)
require(zoo)
require(lattice)

# Data path
data.dir <- "D:/R Projects"
output.dir <- "D:/R Projects"
data.read.spx <- paste(data.dir,"GVP.csv",sep="/")

# Read data
read.spx <- read.csv(data.read.spx,header=TRUE, sep=",",skip=0,stringsAsFactors=FALSE)

# Convert Values To Numeric
cols <-c(3:8)
read.spx[,cols] %<>% lapply(function(x) as.numeric(as.character(x)))

# Convert Date Column [1]
read.spx$Date <- ymd(read.spx$Date)

# Make new data frame
new.df <- data.frame(read.spx)

# Subset Date Range
#new.df <- subset(new.df, Date >= "2000-01-06" & Date <= "2017-08-06")
#new.df <- subset(new.df, Date >= as.Date("2017-01-07") ) 

#Create lagged variables
lags <- 2:20

# Function for finding differences in lags. Todays Close - 'n' lag period
getLAG.DIFF <- function(lagdays) {
  function(new.df) {
    c(rep(NA, lagdays), diff(new.df$Close, lag = lagdays, differences = 1, arithmetic = TRUE, na.pad = TRUE))
  }
}
# Create a matrix to put the lagged differences in
lag.diff.matrix <- matrix(nrow=nrow(new.df), ncol=0)

# Loop for filling it
for (i in lags) {
  lag.diff.matrix <- cbind(lag.diff.matrix, getLAG.DIFF(i)(new.df))
}

# Rename columns
colnames(lag.diff.matrix) <- sapply(lags, function(n)paste("lagged.diff.n", n, sep=""))

# Bind to existing dataframe
new.df <-  cbind(new.df, lag.diff.matrix)
head(new.df)

# Calculate Variances of 'n period' differences
variance.vec <- apply(new.df[,9:ncol(new.df)], 2, function(x) var(x, na.rm=TRUE))

# Linear regression of log variances vs log lags
log.linear <- lm(formula = log(variance.vec) ~ log(lags))
# Print general linear regression statistics
summary(log.linear)
# Plot log of variance 'n' lags vs log time
xyplot(log(variance.vec) ~ log(lags),
       main="GVP log variance of price diff Vs log time lags",
       xlab = "Time",
       ylab = "Logged Variance 'n' lags",
       grid = TRUE,
       type = c("p","r"),col.line = "red",
       abline=(h = 0)) 

hurst.exponent = coef(log.linear)[2]/2
hurst.exponent

Rplot14

linear.regression.output

If we divide the log(lags) coefficient by 2 we obtain the Hurst exponent of 0.4598435.

Remember: H values less than 0.5 = mean reversion.

0.5 = random walk.

Greater than 0.5 = momentum.

Great.

Let's apply a simple linear strategy to see how it performs over this series. We will set up a rolling z-score: we buy when the z-score crosses below 0 and sell when it crosses back over 0. We use an arbitrarily chosen look back of 10 days for this.
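The back test itself is not reproduced in this post, but a minimal sketch of the rule (rolling z-score with a 10 day look back, long below 0, flat above 0, no commissions or slippage; the Close column of new.df is assumed) could look like this:

# Minimal sketch of the z-score rule described above (illustrative only)
require(zoo)

n      <- 10                                        # look back (days)
close  <- new.df$Close
z      <- (close - rollmeanr(close, n, fill = NA)) /
          rollapplyr(close, n, sd, fill = NA)       # rolling z-score
signal <- ifelse(z < 0, 1, 0)                       # long when z-score < 0
signal <- c(NA, head(signal, -1))                   # trade next bar (no look-ahead)

rets   <- c(NA, diff(close) / head(close, -1))      # daily close-to-close returns
strat  <- ifelse(is.na(signal), 0, signal) * ifelse(is.na(rets), 0, rets)
equity <- cumprod(1 + strat)                        # compounded growth of $1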

Here are the results:

Rplot109

The above plot is the compounded growth of $1 and since 1995 $1 has grown to over $800 or over 79,900 %.

Next let's calculate the half life of mean reversion. We do this with linear regression. For the dependent variable we use the price difference between today's close and yesterday's close. For the independent variable we use yesterday's close minus the mean of the lagged closes.

Note we use the previous 100 days of data to produce this test:

# Calculate yt-1 and (yt-1-yt)
y.lag <- c(random.data[2:length(random.data)], 0)   # Set vector to lag -1 day
y.lag  <- y.lag[1:length(y.lag)-1]    # As shifted vector by -1, remove anomalous element at end of vector
random.data <- random.data[1:length(random.data)-1]  # Shift data by -1 to make same length of vector
y.diff <- random.data - y.lag    # Subtract todays close - close from yesterday
y.diff  <- y.diff [1:length(y.diff)-1]   # Adjust length of vector
prev.y.mean <- y.lag - mean(y.lag)  # Subtract yesterdays close from the mean of lagged differences
prev.y.mean <- prev.y.mean [1:length(prev.y.mean )-1]  # Adjust length of vector
final.df <- data.frame(y.diff, prev.y.mean)   # Combine into the final data frame

# Linear Regression With Intercept
result <- lm(y.diff ~ prev.y.mean, data = final.df)
half_life <- -log(2)/coef(result)[2]
half_life

We obtain a half life of 4.503093 days.

Next, let's see if we can set our linear strategy look back period equal to the half life to see if it improves results. The original look back period of 10 days was chosen arbitrarily. The result of a look back of 4.5, rounded to 5 days, is below:

Rplot109

From 1995 to roughly present day the result did not improve significantly, but looking at the plot we see a large uptick in the equity curve from 2013 onwards. Let's subset our data to only include data post 2013 and re-run the 10 day look back and also the 5 day look back to see the benefit of optimizing using the mean reversion half life.

First the result of the 10 day look back arbitrarily chosen:

Rplot112

We see that $1 has grown to $8, or a 700% increase.

Next the look back of 4.5 rounded to 5 days derived from the mean reversion half life calculation:

Rplot109.png
Using a look back set equal to the mean reversion half life (5 days, rounded), $1 has grown to over $15, or a 1400% increase.

Let's run the Hurst exponent on both periods: the first from 1995 to 2013, the second from 2013 to roughly present day:

1st test: We see H = 0.4601632
2nd: We see H = 0.4230494

Ok so we see the Hurst exponent become more mean reverting post 2013. If we test >= 2016 and >= 2017 we see:
H = 0.3890816 and 0.2759805 respectively.

Next, let's choose a random time frame between 1995 and 2013.

From the period 2000 to 2003, H = 0.5198083, which is closer to a random walk.

If we look at the period 2003 to 2008 we have an H value of 0.4167166, which is more mean reverting; however, this H value of 0.41 is actually lower than the post 2013 H value of 0.4230494. So the H value in this case did not imply that because H is lower, gains should be higher.

This might be caused by other factors: frequency of trades, price range, fluctuations, etc.

Note this post is largely theoretical; no commissions are included in any of the trades. It demonstrates combining statistical tools with a back test.

Half life of Mean Reversion – Ornstein-Uhlenbeck Formula for Mean-Reverting Process

Ernie Chan proposes a method to calculate the speed of mean reversion. He proposes to adjust the ADF (augmented Dickey-Fuller, a more stringent test) formula from discrete time to differential form. This takes the shape of the Ornstein-Uhlenbeck formula for a mean reverting process. Ornstein Uhlenbeck Process – Wikipedia

dy(t) = (λy(t − 1) + μ)dt + dε

Where dε is some Gaussian noise. Chan goes on to mention that using the discrete ADF formula below:

Δy(t) = λy(t − 1) + μ + βt + α1Δy(t − 1) + … + αkΔy(t − k) + εt

and performing a linear regression of Δy(t) against y(t − 1) provides λ which is then used in the first equation. However, the advantage of writing the formula in differential form is it allows an analytical solution for the expected value of y(t).

E[y(t)] = y0·exp(λt) − (μ/λ)(1 − exp(λt))

Mean reverting series exhibit negative λ. Conversely positive λ means the series doesn’t revert back to the mean.

When λ is negative, the value of price decays exponentially to the value −μ/λ with the half-life of decay equals to −log(2)/λ. See references.
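The half-life expression follows directly from the decay term above: the deviation from the long run level shrinks by the factor exp(λt); setting that factor to one half and solving for t gives

$$e^{\lambda t_{1/2}} = \tfrac{1}{2} \;\Rightarrow\; t_{1/2} = -\frac{\log 2}{\lambda}$$

which is the -log(2)/λ used in the code below (and is only positive when λ is negative).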

We can perform the regression of yt-1 and (yt-1-yt) with the below R code on the SPY price series. For this test we will use a look back period of 100 days versus the entire price series (1993 inception to present). If we used all of the data, we would be including how long it takes to recover from bear markets. For trading purposes, we wish to use a shorter sample of data in order to produce a more meaningful statistical test.

The procedure:
1. Lag the SPY close by -1 day
2. Subtract today's close – yesterday's close
3. Subtract yesterday's close – mean(lagged closes)
4. Perform a linear regression of (today's close – yesterday's close) ~ (yesterday's close – mean(lagged closes))
5. On the regression output perform -log(2)/λ

# Calculate yt-1 and (yt-1-yt)
y.lag <- c(random.data[2:length(random.data)], 0) # Set vector to lag -1 day
y.lag <- y.lag[1:length(y.lag)-1] # As shifted vector by -1, remove anomalous element at end of vector
random.data <- random.data[1:length(random.data)-1] # Make vector same length as vector y.lag
y.diff <- random.data - y.lag # Subtract todays close from yesterdays close
y.diff <- y.diff [1:length(y.diff)-1] # Make vector same length as vector y.lag
prev.y.mean <- y.lag - mean(y.lag) # Subtract yesterdays close from the mean of lagged differences
prev.y.mean <- prev.y.mean [1:length(prev.y.mean )-1] # Make vector same length as vector y.lag
final.df <- data.frame(y.diff, prev.y.mean) # Create final data frame

# Linear Regression With Intercept
result <- lm(y.diff ~ prev.y.mean, data = final.df)
half_life <- -log(2)/coef(result)[2]
half_life

# Linear Regression With No Intercept
result = lm(y.diff ~ prev.y.mean + 0, data = final.df)
half_life1 = -log(2)/coef(result)[1]
half_life1

# Print general linear regression statistics
summary(result)

regress

regress..

Observing the output of the above regression we see that the slope is negative, so this is a mean reverting process. We see from summary(result) that λ is -0.06165 and when we perform -log(2)/λ we obtain a mean reversion half life of 11.24267 days.

11.24267 days is the half life of mean reversion, which means we anticipate the series to have reverted most of the way to the mean by about 2 * the half life, or 22.48534 days. However, to trade mean reversion profitably we need not exit directly at the mean each time. Essentially, if a trade extends over 22 days we may suspect a short term or permanent regime shift. One may insulate against such defeats by setting a 'time stop'.
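A time stop of that sort is simple to express; the sketch below is a hypothetical helper (names and the two-half-lives threshold are illustrative, not from the original post) that flattens a long/flat signal once it has been held too long.

# Hypothetical time stop: cap the holding period of a 1/0 signal vector.
apply_time_stop <- function(signal, max_hold) {
  held <- 0
  out  <- signal
  for (i in seq_along(signal)) {
    held <- if (!is.na(signal[i]) && signal[i] == 1) held + 1 else 0
    if (held > max_hold) out[i] <- 0   # force an exit after max_hold bars
  }
  out
}

# e.g. with the 11.24 day half life: apply_time_stop(signal, ceiling(2 * 11.24))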

The obtained 11.24267 day half life is short enough for an interday trading horizon. If we obtained a longer half life we may be waiting a long time for the series to revert back to the mean. Once we determine that the series is mean reverting we can trade this series profitably with a simple linear model using a look back period equal to the half life. In a previous post we explored a simple linear z-score model: https://flare9xblog.wordpress.com/2017/09/24/simple-linear-strategy-for-sp500/

The lookback period of 11 days was obtained using a ‘brute force approach’ (maybe luck). An optimal look back period of 11 days produced the best result for the SPY.

Post brute forcing, it was noted during optimization of the above strategy that adjusting the look back from 11 days to any number above or below resulted in a decrease in performance.

We illustrate the effect of moving the look back period shorter and longer than the obtained half life. For simplicity, we will use the total cumulative returns for comparison:

[Cumulative return plots for look back periods of 10, 11 and 12 days]

We see that a look back of 11 days produced the highest cumulative compounded returns.

Ernie Chan goes on to address the question 'why bother with statistical testing?'. The answer lies in the fact that specific trading rules only trigger when their conditions are met and therefore tend to skip over data. Statistical testing includes data that a model may skip over and thus produces results with higher statistical significance.

Furthermore, once we confirm a series is mean reverting we can be assured that some profitable trading strategy exists, not necessarily the specific strategy that we just back tested.

References
Algorithmic Trading: Winning Strategies and Their Rationale – May 28, 2013, by Ernie Chan

Modelling The Hurst Exponent

One of the purposes of using the Hurst Exponent is to validate whether a price series is momentum, random walk or mean reverting. If we know this type of information, we may ‘fit’ a model to capture the nature of the series.

The Hurst exponent is categorized as:
H <0.5 = mean reverting
H == 0.5 = random walk 
H >0.5 = momentum

Editable parameters:
mu = mean # Change mean value
eta = theta # Try decreasing theta for less mean reversion, increase for more mean reversion
sigma = standard deviation # Change the height of the peaks and valleys with standard deviation

# Create OU simulation
OU.sim <- function(T = 1000, mu = 0.75, eta = 0.3, sigma = 0.05){
  P_0 = mu # Starting price is the mean
  P = rep(P_0,T)
  for(i in 2:T){
    P[i] = P[i-1] + eta * (mu - P[i-1]) + sigma * rnorm(1) * P[i-1]
  }
  return(P)
}

# Plot
plot(OU.sim(), type="l", main="Mean Reversion Sim")

# Save plot to data frame
plot.df <- data.frame(OU.sim())
plot(plot.df$OU.sim.., type="l",main="Mean Reversion Sim")

Rplot05

Looks pretty mean reverting.

We stored the simulation in a data frame, so let's run the Hurst exponent to see which H value we obtain.

# Hurst Exponent (varying lags)
require(magrittr)
require(zoo)
require(lattice)

#Create lagged variables
lags <- 2:20

# Function for finding differences in lags. Todays Close - 'n' lag period
getLAG.DIFF <- function(lagdays) {
  function(plot.df) {
    c(rep(NA, lagdays), diff(plot.df$OU.sim.., lag = lagdays, differences = 1, arithmetic = TRUE, na.pad = TRUE))
  }
}
# Create a matrix to put the lagged differences in
lag.diff.matrix <- matrix(nrow=nrow(plot.df), ncol=0)

# Loop for filling it
for (i in lags) {
  lag.diff.matrix <- cbind(lag.diff.matrix, getLAG.DIFF(i)(plot.df))
}

# Rename columns
colnames(lag.diff.matrix) <- sapply(lags, function(n)paste("lagged.diff.n", n, sep=""))

# Bind to existing dataframe
plot.df <-  cbind(plot.df, lag.diff.matrix)

# Calculate Variances of 'n period' differences
variance.vec <- apply(plot.df[,2:ncol(plot.df)], 2, function(x) var(x, na.rm=TRUE))

# Linear regression of log variances vs log lags
log.linear <- lm(formula = log(variance.vec) ~ log(lags))
# Print general linear regression statistics
summary(log.linear)
# Plot log of variance 'n' lags vs log time
xyplot(log(variance.vec) ~ log(lags),
       main="SPY Daily Price Differences Variance vs Time Lags",
       xlab = "Time",
       ylab = "Logged Variance 'n' lags",
       grid = TRUE,
       type = c("p","r"),col.line = "red",
       abline=(h = 0)) 

hurst.exponent = coef(log.linear)[2]/2
hurst.exponent

We obtain a Hurst exponent of 0.1368407, which is significantly mean reverting.

Let's change some of the parameters of the simulation to create a moderately mean reverting series. We can alter the theta: if we change eta = 0.3 to eta = 0.04 we obtain this output:

Rplot06

This looks less mean reverting than the first series, and H = 0.4140561, which is below 0.50 and still considered mean reverting.

Let us test the SPY from 1993 (inception) to present (9.23.2017) to see what the H value is. The chart below is the linear regression between the log variance of the SPY lagged price differences and the log time lags. The Hurst exponent is the slope / 2 (code included).

Rplot07

The Hurst exponent for the SPY daily bars on time lags 2:20 is 0.4378202. We know that price series display different characteristics over varying time frames. If we simply plot the SPY daily closes:

Rplot10

Observing the long term trend we see that the series looks more trending, or momentum. We already tested a 2:20 day lag period, which gave an H value of 0.4378202, and if we place the lags from 6 months to a year and a half (126:378 trading days) we see that H = 0.6096454, on the momentum side of the scale.

So far – it is as expected.

What does a random series look like?

We can create this using randn from the ramify package. We simply take the cumulative sum of the randomly generated data points and add a small positive drift to make it a trending series.

# Plot Random Walk With A Trend
require(ramify)
random.walk = cumsum(randn(10000)+0.025)
plot(random.walk, type="l", main="Random Walk")

# Random Walk Data Frame
random.df <- data.frame(cumsum(randn(10000)+0.03))
colnames(random.df)[1] <- "random"
plot(random.df$random, type="l", main="Random Walk")

Rplot09

The H for this series (lags 2:20) is 0.4999474, which rounds to 0.50, a random walk.

It would seem that, based on the statistical tests, the Hurst exponent is reasonably accurate in reflecting the nature of the series. It should be noted that different lags produce different regimes: 2:20 lags exhibit stronger mean reversion, while on a 6 month to 1.5 year time period (lags 126:378) the market exhibited stronger momentum, H = 0.6096454. At lags 50:100 it is close to a random walk at H = 0.5093078. What does this mean? When optimizing models, we must also optimize time frames.

To recap we:

1. Created a mean reverting price series with mu = 0.75, eta = 0.3, sigma = 0.05
2. We saved the output to a data frame and used the hurst calculation (linear regression of log lagged price differences vs log time) over a 2:20 lagged period to obtain the H value. See this post for more information on the hurst exponent calculation: https://flare9xblog.wordpress.com/2017/08/11/hurst-exponent-in-r/
3. The result was significantly mean reverting as we expected.
4. We tested SPY closes 1993 to 9.23.2017. On a lagged period of 2:20 the series was mean reverting and on a 6 month to 1.5 year time period the series was more momentum. This was as expected.
5. We created a random set of numbers and added a small drift to each data point to create a random walk trend. We obtained an H value of 0.5 (rounded), which is as expected.

The parameters for the simulated series can be edited to change the characteristics and the Hurst exponent can be calculated on each output. Try making the series more mean reverting or less mean reverting and the H value should adjust accordingly.
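For example, a short sweep over eta (assuming the OU.sim function above has been run) is one way to see that behaviour; the Hurst calculation inside the loop is just a compact restatement of the variance-of-lagged-differences estimate used in this post.

# Sweep the mean-reversion speed eta: larger eta (stronger mean reversion)
# should push the estimated H lower. Assumes OU.sim() from above is defined.
set.seed(123)
lag.range <- 2:20
for (e in c(0.01, 0.04, 0.1, 0.3)) {
  sim <- OU.sim(T = 1000, eta = e)
  v   <- sapply(lag.range, function(k) var(diff(sim, lag = k), na.rm = TRUE))
  h   <- unname(coef(lm(log(v) ~ log(lag.range)))[2] / 2)
  cat("eta =", e, " H =", round(h, 3), "\n")
}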

Full R code below:


# Modelling different price series 
# Mean reverison, random and momentum 
# Andrew Bannerman 9.24.2017

# Create OU simulation
# mu = mean
# eta = theta # Try decreasing theta for less mean reversion, increase for more mean reversion
# sigma = standard deviation # Change the height of the peaks and valleys with standard deviation
OU.sim <- function(T = 1000, mu = 0.75, eta = 0.04, sigma = 0.05){
  P_0 = mu # Starting price is the mean
  P = rep(P_0,T)
  for(i in 2:T){
    P[i] = P[i-1] + eta * (mu - P[i-1]) + sigma * rnorm(1) * P[i-1]
  }
  return(P)
}

# Plot
plot(OU.sim(), type="l", main="Mean Reversion Sim")

# Save plot to data frame 
plot.df <- data.frame(OU.sim())
plot(plot.df$OU.sim.., type="l",main="Mean Reversion Sim")

# Hurst Exponent Mean Reversion (varying lags)
require(magrittr)
require(zoo)
require(lattice)

#Create lagged variables
lags <- 2:20

# Function for finding differences in lags. Todays Close - 'n' lag period
getLAG.DIFF <- function(lagdays) {
  function(plot.df) {
    c(rep(NA, lagdays), diff(plot.df$OU.sim.., lag = lagdays, differences = 1, arithmetic = TRUE, na.pad = TRUE))
  }
}
# Create a matrix to put the lagged differences in
lag.diff.matrix <- matrix(nrow=nrow(plot.df), ncol=0)

# Loop for filling it
for (i in 2:20) {
  lag.diff.matrix <- cbind(lag.diff.matrix, getLAG.DIFF(i)(plot.df))
}

# Rename columns
colnames(lag.diff.matrix) <- sapply(2:20, function(n)paste("lagged.diff.n", n, sep=""))

# Bind to existing dataframe
plot.df <-  cbind(plot.df, lag.diff.matrix)

# Calculate Variances of 'n period' differences
variance.vec <- apply(plot.df[,2:ncol(plot.df)], 2, function(x) var(x, na.rm=TRUE))

# Linear regression of log variances vs log lags
log.linear <- lm(formula = log(variance.vec) ~ log(lags))  
# Print general linear regression statistics  
summary(log.linear) 
# Plot log of variance 'n' lags vs log time  
xyplot(log(variance.vec) ~ log(lags),         
       main="SPY Daily Price Differences Variance vs Time Lags",        
       xlab = "Time",        
       ylab = "Logged Variance 'n' lags",       
       grid = TRUE,        
       type = c("p","r"),col.line = "red",        
       abline=(h = 0)) 

hurst.exponent = coef(log.linear)[2]/2
hurst.exponent

# Write output to file write.csv(new.df,file="G:/R Projects/hurst.csv")

  # Plot Random Walk With A Trend
  require(ramify)
  random.walk = cumsum(randn(10000)+0.025)
  plot(random.walk, type="l", main="Random Walk")
  
  # Random Walk Data Frame 
  random.df <- data.frame(cumsum(randn(10000)+0.03))
  colnames(random.df)[1] <- "random"
  plot(random.df$random, type="l", main="Random Walk")

# Hurst Exponent Random Walk (varying lags)
require(magrittr)
require(zoo)
require(lattice)

#Create lagged variables
lags <- 2:20

# Function for finding differences in lags. Todays Close - 'n' lag period
getLAG.DIFF <- function(lagdays) {
  function(random.df) {
    c(rep(NA, lagdays), diff(random.df$random, lag = lagdays, differences = 1, arithmetic = TRUE, na.pad = TRUE))
  }
}
# Create a matrix to put the lagged differences in
lag.diff.matrix <- matrix(nrow=nrow(random.df), ncol=0)

# Loop for filling it
for (i in 2:20) {
  lag.diff.matrix <- cbind(lag.diff.matrix, getLAG.DIFF(i)(random.df))
}

# Rename columns
colnames(lag.diff.matrix) <- sapply(2:20, function(n)paste("lagged.diff.n", n, sep=""))

# Bind to existing dataframe
random.df <-  cbind(random.df, lag.diff.matrix)

# Calculate Variances of 'n period' differences
variance.vec <- apply(random.df[,2:ncol(random.df)], 2, function(x) var(x, na.rm=TRUE))

# Linear regression of log variances vs log lags
log.linear <- lm(formula = log(variance.vec) ~ log(lags))  
# Print general linear regression statistics  
summary(log.linear) 
# Plot log of variance 'n' lags vs log time  
xyplot(log(variance.vec) ~ log(lags),         
       main="SPY Daily Price Differences Variance vs Time Lags",        
       xlab = "Time",        
       ylab = "Logged Variance 'n' lags",       
       grid = TRUE,        
       type = c("p","r"),col.line = "red",        
       abline=(h = 0)) 

hurst.exponent = coef(log.linear)[2]/2
hurst.exponent

References
Algorithmic Trading: Winning Strategies and Their Rationale – May 28, 2013, by Ernie Chan