Non-Price Data – UltraPRO ETF Assets Under Management

In previous posts we determined that the S&P 500 is mean reverting on an interday basis. What if there are better ways to capture that mean-reverting nature than using price data alone?

In this post we will look at the UltraPRO family of ETFs and use their assets under management (AUM) to gauge overall market sentiment. The procedure is as follows:

1. Sum the bullish ETF AUMs.
2. Sum the bearish ETF AUMs.
3. Take either the ratio (total bull AUM / total bear AUM) or the spread (total bull AUM - total bear AUM).
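
Using the column names from the full script at the end of this post (aum.sum.bull and aum.sum.bear; the aum.ratio and aum.spread names are my own, purely for illustration), step 3 is one line either way:

# Step 3 sketch: ratio or spread of the bull and bear group totals
new.df$aum.ratio  <- new.df$aum.sum.bull / new.df$aum.sum.bear   # ratio
new.df$aum.spread <- new.df$aum.sum.bull - new.df$aum.sum.bear   # spread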

For bullish funds I have used:

"DDM","MVV","QLD","SAA","SSO","TQQQ","UDOW","UMDD","UPRO","URTY","UWM","BIB","FINU","LTL","ROM","RXL","SVXY","UBIO","UCC","UGE","UPW","URE","USD","UXI","UYG","UYM"

And bearish:

"DOG","DXD","MYY","MZZ","PSQ","QID","RWM","SBB","SDD","SDOW","SDS","SH","SPXU","SQQQ","SRTY","TWM","SMDD","UVXY","VIXM","VIXY"

(Note: if you intend to run my code, make sure the order of your bull/bear ETF files matches mine. You can check the order of the files in your directory by printing file.names after the list.files call in the full script; this is important!)
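
For reference, that check mirrors the list.files call used later in the full script:

# Print the downloaded file names to confirm they match the ticker ordering above
file.names <- list.files(path = "C:/R Projects/Data/etf_aum", pattern = ".csv")
print(file.names)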

Let's see what this looks like. FYI, you can download all the historical NAVs in .csv format from the UltraPRO website manually, or you can use the R code below:

# Download ETF AUM Files
file.list <- c("DDM","MVV","QLD","SAA","SSO","TQQQ","UDOW","UMDD","UPRO","URTY","UWM", "BIB", "FINU","LTL","ROM", "RXL", "SVXY","UBIO","UCC","UGE","UPW","URE","USD","UXI","UYG","UYM","DOG","DXD","MYY","MZZ","PSQ","QID","RWM","SBB","SDD","SDOW","SDS","SH","SPXU","SQQQ","SRTY","TWM","SMDD","UVXY","VIXM","VIXY")

for (i in 1 : length(file.list)) {
  file.name.variable <-  file.list[i]
  url <- paste0("https://accounts.profunds.com/etfdata/ByFund/",
                file.name.variable, "-historical_nav.csv")
  destfile <- paste0("C:/R Projects/Data/etf_aum/",
                     file.name.variable, ".csv")
  download.file(url, destfile, mode="wb")
}
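
If any single download fails, the loop above stops at that ticker. A hedged variant of the same loop (my own addition, wrapping the call in tryCatch) skips failures and carries on:

# Tolerant variant: skip tickers whose download fails rather than aborting the loop
for (i in 1:length(file.list)) {
  file.name.variable <- file.list[i]
  url <- paste0("https://accounts.profunds.com/etfdata/ByFund/",
                file.name.variable, "-historical_nav.csv")
  destfile <- paste0("C:/R Projects/Data/etf_aum/", file.name.variable, ".csv")
  tryCatch(download.file(url, destfile, mode = "wb"),
           error = function(e) message("Failed to download ", file.name.variable))
}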

We then sum the assets under management within each group to obtain our total bullish and bearish AUMs:

[Figure: total bullish AUM]

[Figure: total bearish AUM]

[Figure: bull/bear AUM spread]

Note that the spread is mean reverting on a yearly basis. We could run a statistical test, but visually the spread clearly does revert to its mean. We can also overlay a 9-period simple moving average (SMA) and measure how often the spread stretches above and below that rolling 9 SMA. This looks as follows:

[Figure: AUM spread with its rolling 9-period SMA]

We see that the spread relative to its rolling 9-period SMA is strongly mean reverting. This will be the subject of our back test. To keep things simple, we will buy SPY when the rolling z-score falls below 0 and sell when it crosses back above 0.
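
For clarity, the rolling z-score is built from TTR's SMA and runSD, exactly as in the full script below; a minimal sketch, assuming x is the daily bull/bear AUM series:

require(TTR)
# Rolling 9-day z-score: today's distance from the 9-day mean, in standard deviations
z9 <- (x - SMA(x, n = 9)) / runSD(x, n = 9)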

[Figure: mean reversion strategy vs buy and hold equity curves]

[Figure: mean reversion strategy drawdowns]

We see that the maximum drawdown is 18%, and we avoid most of the 2008 decline. As the drawdown is so modest, we could explore using 2x or 3x leverage to increase profits and, for example, match the risk profile of buy and hold.
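
A crude sketch of that idea, simply scaling the daily strategy returns (my own addition; this ignores financing costs and the daily-reset path dependence of real leveraged products):

# Approximate 2x leverage by doubling daily strategy returns (naive assumption)
lev.ret <- 2 * new.df$mean.rev.equity
lev.equity <- cumprod(1 + lev.ret)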

Annualized return is 12% since 2006 (the AUM data for the UltraPRO ETF family does not exist prior to 2006). The Sharpe ratio is 0.84.

We total 267 trades, 192 of which were profitable and 75 of which were losers, for a win rate of 72%. We are invested 47% of the time from 2006 to the present day. That means we beat buy and hold while freeing up capital for 'layering' other strategies.
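
As a quick sanity check on the win rate:

192 / (192 + 75)   # = 0.719, roughly the 72% quoted above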

The fact that this didn't break down during the 2008 crisis is encouraging. I would rather see robustness across all market regimes.

In closing, the assets under management, taken collectively, are essentially a measure of market sentiment. Asset levels rise and fall in the bull/bear funds according to the views of market participants.

The strategy above seeks to trade the market when sentiment is lower, or more on the bearish side, and to exit when sentiment has reverted back to the mean, in this case when the 9-period rolling z-score crosses back over 0.
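
In code, that rule is the pair of lines from the full script below (with the trigger set to 0):

# Long when the 9-day z-score is below 0; exit once it crosses back above 0
signal <- new.df$zscore.n9
new.df$enter <- ifelse(signal < 0, 1, 0)
new.df$exit  <- ifelse(signal > 0, 1, 0)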

Ideas for other tests:

  1. Change the exit criteria, for example sell at a value higher than 0
  2. Specify an 'n' day exit, for example exit the trade after 3, 4 or 5 days (see the sketch after this list)
  3. Do not sell on the first holding day, but sell at the first immediate profit from entry

These may be better or worse exits than the one above, but I will leave it to you good readers to test those!
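
For idea 2, a hedged sketch reusing the rleid grouping from the back test script below (max.hold is a hypothetical parameter of my own; this simply flattens the signal after n days in a trade):

# Hypothetical n-day time exit: flatten the signal after max.hold days in a trade
max.hold <- 5  # hypothetical maximum holding period in days
new.df <- new.df %>%
  dplyr::mutate(RunID = data.table::rleid(signal)) %>%
  group_by(RunID) %>%
  dplyr::mutate(signal = ifelse(signal == 1 & row_number() > max.hold, 0, signal)) %>%
  ungroup() %>%
  select(-RunID)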

Full R code below, which includes:

  1. Loop for downloading the specified ETF .csv files from the UltraPRO website
  2. Loop for merging all .csv files into one data frame
  3. Back test script
  4. Plots used above

# UltraPRO ETF Assets Under Management
# Scrape ETF data, study AUM's, create indicators, back test strategy
# Andrew Bannerman 10.4.2017
# Approach: take the ratio (or spread) of bull vs bear AUM, then z-score it

require(dplyr)
require(magrittr)
require(TTR)
require(zoo)
require(data.table)
require(xts)
require(PerformanceAnalytics)
require(ggplot2)
require(ggthemes)
require(lubridate)
require(scales)

# Download ETF AUM Files
file.list <- c("DDM","MVV","QLD","SAA","SSO","TQQQ","UDOW","UMDD","UPRO","URTY","UWM", "BIB", "FINU","LTL","ROM", "RXL", "SVXY","UBIO","UCC","UGE","UPW","URE","USD","UXI","UYG","UYM","DOG","DXD","MYY","MZZ","PSQ","QID","RWM","SBB","SDD","SDOW","SDS","SH","SPXU","SQQQ","SRTY","TWM","SMDD","UVXY","VIXM","VIXY")
for (i in 1 : length(file.list)) {
  file.name.variable <-  file.list[i]
  url <- paste0("https://accounts.profunds.com/etfdata/ByFund/",
                file.name.variable, "-historical_nav.csv")
  destfile <- paste0("C:/R Projects/Data/etf_aum/",
                     file.name.variable, ".csv")
  download.file(url, destfile, mode="wb")
}

# Set data Dir
data.dir <- "C:/R Projects/Data/etf_aum/"
download.dir <- "C:/R Projects/Data/etf_aum/"

#Read list of files
files = list.files(path = "C:/R Projects/Data/etf_aum/", pattern = ".", full.names = TRUE)

# Read file names
file.names = list.files(path = "C:/R Projects/Data/etf_aum", pattern = ".csv")

#Read the first file
data.file <- paste(files[1],sep="")

# Read data first file to data frame
df <- read.csv(data.file,header=TRUE, sep=",",skip=0,stringsAsFactors=FALSE)

# Convert data formats
cols <-c(4:9)
df[,cols] %<>% lapply(function(x) as.numeric(as.character(x)))
#Convert Date Column [1]
df$Date <- mdy(df$Date)   #mdy for .txt

# Loop for merging all files to one data frame by common date

for (f in 2:length(files)) {
  data.next <- paste(files[f],sep="")
  # if using names.txt
  #  data.next <- paste(data.dir,fileslist[f],'.csv',sep="")
  next.file <- read.csv(data.next,header=TRUE, sep=",",skip=0,stringsAsFactors=FALSE) 

  cols <-c(4:9)
  next.file[,cols] %<>% lapply(function(x) as.numeric(as.character(x)))
  #Convert Date Column [1]
  next.file$Date <- mdy(next.file$Date)   #mdy for .txt

  next.df <- full_join(df, next.file, by = c("Date" = "Date"))

 cnames <- rep(c("Proshares.Name","Ticker","NAV","Prior.NAV","NAV.Change","Nav.Change.a","Shares.Outstanding","AUM"),f)
  cnums <- rep(seq(1,f),each=8)
  columnames <- paste(cnames,cnums)
  colnames(next.df) <- c('Date',columnames)

  df <- next.df
  colnames(df)
}

# Sort data.frame from 'newest to oldest' to 'oldest to newest'
df <- arrange(df, Date)

# Subset AUM columns to one data frame
new.df <- df[, seq(from = 1, to = ncol(df), by=8)]

# Convert all NA to 0
new.df[is.na(new.df)] <- 0

colnames(new.df)

# Sum all AUMs
new.df$aum.sum.bull <-  rowSums(new.df[,c("AUM 1", "AUM 2", "AUM 5", "AUM 6", "AUM 7", "AUM 12", "AUM 13", "AUM 15", "AUM 16", "AUM 26", "AUM 27","AUM 28","AUM 30","AUM 31","AUM 32","AUM 33","AUM 34","AUM 35","AUM 36","AUM 37","AUM 38","AUM 39","AUM 41","AUM 42","AUM 43","AUM 44")])   # Bull Aums
new.df$aum.sum.bear <-  rowSums(new.df[,c("AUM 3", "AUM 4", "AUM 8", "AUM 9", "AUM 10", "AUM 11", "AUM 14", "AUM 17", "AUM 18", "AUM 19", "AUM 20","AUM 21","AUM 22","AUM 23","AUM 24","AUM 25","AUM 29","AUM 40","AUM 45","AUM 46")])   # Bear AUMs

# Ratio or spread of Bull to bear
new.df$aum.sum <- new.df$aum.sum.bull / new.df$aum.sum.bear

# Drop one bad data row from my download (row 2855); adjust or remove this line for your own data
new.df <- new.df[-c(2855), ]

# Make plot of the bull/bear AUM ratio
ggplot(new.df, aes(Date, aum.sum)) +
  geom_line()+
  theme_economist() +
  scale_x_date(breaks = date_breaks("years"), labels = date_format("%y"))+
  scale_y_continuous()+
  ggtitle("Ratio of Total Bull To Bear Assets Under Management", subtitle = "2006 To Present") +
  labs(x="Year",y="Bull / Bear AUM Ratio")+
  theme(plot.title = element_text(hjust=0.5),plot.subtitle =element_text(hjust=0.5))+
  theme(axis.text.x= element_text(hjust=0.5, angle=0))

# Create rolling n day z-score of total AUM sum
# Use TTR package to create n day SMA
get.SMA <- function(numdays) {
  function(new.df) {
    SMA(new.df$aum.sum, n=numdays)    # Calls TTR package
  }
}
# Create a matrix to put the SMA's in
sma.matrix <- matrix(nrow=nrow(new.df), ncol=0)

# Loop for filling it
for (i in 2:30) {
  sma.matrix <- cbind(sma.matrix, get.SMA(i)(new.df))
}

# Rename columns
colnames(sma.matrix) <- sapply(2:30, function(n)paste("sma.n", n, sep=""))

# Bind to existing dataframe
new.df <-  cbind(new.df, sma.matrix)

# Use TTR package to create n day SD
get.SD <- function(numdays) {
  function(new.df) {
    runSD(new.df$aum.sum, numdays, cumulative = FALSE)     # Calls TTR package to create running standard deviation
  }
}
# Create a matrix to put the SDs in
sd.matrix <- matrix(nrow=nrow(new.df), ncol=0)

# Loop for filling it
for (i in 2:30) {
  sd.matrix <- cbind(sd.matrix, get.SD(i)(new.df))
}

# Rename columns
colnames(sd.matrix) <- sapply(2:30, function(n)paste("sd.n", n, sep=""))

# Bind to existing dataframe
new.df <-  cbind(new.df, sd.matrix)

# zscore calculation: (aum.sum - n day SMA) / n day SD, for n = 2..30
for (n in 2:30) {
  new.df[[paste0("zscore.n", n)]] <-
    (new.df$aum.sum - new.df[[paste0("sma.n", n)]]) / new.df[[paste0("sd.n", n)]]
}

# View Plot
ggplot(new.df, aes(Date, zscore.n9)) +
  geom_line()+
  theme_economist() +
  scale_x_date(breaks = date_breaks("years"), labels = date_format("%y"))+
  scale_y_continuous()+
  ggtitle("Rolling 9 Day Zscore of Total Bull Bear AUM Spread", subtitle = "2006 To Present") +
  labs(x="Year",y="9 Day Zscore")+
  theme(plot.title = element_text(hjust=0.5),plot.subtitle =element_text(hjust=0.5))+
  theme(axis.text.x= element_text(hjust=0.5, angle=0))

# Load S&P500 benchmark data
read.spx <- read.csv("C:/R Projects/Data/SPY.csv", header=TRUE, stringsAsFactors = FALSE)

# Convert data formats
cols <-c(2:7)
read.spx[,cols] %<>% lapply(function(x) as.numeric(as.character(x)))
#Convert Date Column [1]
read.spx$Date <- ymd(read.spx$Date)

# Join to existing data frame
new.df <- full_join(read.spx, new.df, by = c("Date" = "Date"))

# Convert all NA to 0
new.df[is.na(new.df)] <- 0

# Calculate Returns from open to close
new.df$ocret <- apply(new.df[,c('Open', 'Close')], 1, function(x) { (x[2]-x[1])/x[1]} )

# Calculate Close-to-Close returns
new.df$clret <- ROC(new.df$Close, type = c("discrete"))
new.df$clret[1] <- 0

# Subset by date
new.df <- subset(new.df, Date >= as.Date("2006-01-01"))   # as.Date matches the class of the Date column

# Name indicators #
#train.indicator <- train.set$close.zscore.n10
#test.indicator <- test.set$close.zscore.n10
signal <- new.df$zscore.n9
trigger <- 0

# Enter buy / sell rules
new.df$enter <- ifelse(signal < trigger, 1,0)
new.df$exit <- ifelse(signal > 0, 1,0)

# Mean Rev: hold long (1) while the enter condition is true, otherwise flat (0)
new.df <- new.df %>%
  dplyr::mutate(signal = ifelse(enter == 1, 1, 0))

# Lag the signal by one day so the position is entered the day after the signal
new.df$signal <- lag(new.df$signal,1) # dplyr::lag shifts the signal down one row

new.df[is.na(new.df)] <- 0  # Set NA to 0

# Calculate equity curves

# Signal
new.df <- new.df %>%
  dplyr::mutate(RunID = rleid(signal)) %>%
  group_by(RunID) %>%
  dplyr::mutate(mean.rev.equity = ifelse(signal == 0, 0,
                                         ifelse(row_number() == 1, ocret, clret))) %>%
  ungroup() %>%
  select(-RunID)

# Pull select columns from data frame to make XTS whilst retaining formats
xts1 = xts(new.df$mean.rev.equity, order.by=as.POSIXct(new.df$Date, format="%Y-%m-%d %H:%M"))
xts2 = xts(new.df$clret, order.by=as.POSIXct(new.df$Date, format="%Y-%m-%d %H:%M")) 

# Join XTS together
compare <- cbind(xts1,xts2)

require(PerformanceAnalytics)
colnames(compare) <- c("Mean Reversion UltraPRO ETF's","Buy And Hold")
charts.PerformanceSummary(compare,main="Mean Reversion - UltraPRO ETF AUM", wealth.index=TRUE, colorset=rainbow12equal)
performance.table <- rbind(table.AnnualizedReturns(compare),maxDrawdown(compare), CalmarRatio(compare),table.DownsideRisk(compare))
drawdown.table <- rbind(table.Drawdowns(compare))
#dev.off()
#logRets <- log(cumprod(1+compare))
#chart.TimeSeries(logRets, legend.loc='topleft', colorset=rainbow12equal)

print(performance.table)
print(drawdown.table)

# Find net trade result of multiple 'n' day trades
# Find start day of trade, find end day, perform (last(Close) - first(Open))/first(Open) % calculation
new.df <- new.df %>%
  dplyr::mutate(RunID = data.table::rleid(signal)) %>%
  group_by(RunID) %>%
  dplyr::mutate(perc.output = ifelse(signal == 0, 0,
                                     ifelse(row_number() == n(),
                                            (last(Close) - first(Open))/first(Open), 0))) %>%
  ungroup() %>%
  select(-RunID)

# Win / Loss %
# All Holding Days
winning.trades <- sum(new.df$mean.rev.equity > 0, na.rm=TRUE)
losing.trades <- sum(new.df$mean.rev.equity < 0, na.rm=TRUE)
even.trades <- sum(new.df$mean.rev.equity == 0, na.rm=TRUE)
total.days <- NROW(new.df$mean.rev.equity)

# Multi Day Trades
multi.winning.trades <- sum(new.df$perc.output > 0, na.rm=TRUE)
multi.losing.trades <- sum(new.df$perc.output < 0, na.rm=TRUE)
multi.total.days <- multi.winning.trades+multi.losing.trades

# % Time Invested
time.invested <- (winning.trades + losing.trades) / total.days
winning.trades + losing.trades

# Calculate win loss %
# All Days
total <- winning.trades + losing.trades
win.percent <- winning.trades / total
loss.percent <- losing.trades / total
# Multi Day Trades
multi.total <- multi.winning.trades + multi.losing.trades
multi.win.percent <- multi.winning.trades / multi.total
multi.loss.percent <- multi.losing.trades / multi.total
# Calculate Consecutive Wins Loss
# All Days
remove.zero <- new.df[-which(new.df$mean.rev.equity == 0 ), ] # remove rows with 0 values
consec.wins <- max(rle(sign(remove.zero$mean.rev.equity))[[1]][rle(sign(remove.zero$mean.rev.equity))[[2]] == 1])
consec.loss <- max(rle(sign(remove.zero$mean.rev.equity))[[1]][rle(sign(remove.zero$mean.rev.equity))[[2]] == -1])
consec.wins

# Multi Day Trades
multi.remove.zero <- new.df[-which(new.df$perc.output == 0 ), ] # remove rows with 0 values
multi.consec.wins <- max(rle(sign(multi.remove.zero$perc.output))[[1]][rle(sign(multi.remove.zero$perc.output))[[2]] == 1])
multi.consec.loss <-max(rle(sign(multi.remove.zero$perc.output))[[1]][rle(sign(multi.remove.zero$perc.output))[[2]] == -1])

# Calculate Summary Statistics
# All Days
average.trade <- mean(new.df$mean.rev.equity)
average.win <- mean(new.df$mean.rev.equity[new.df$mean.rev.equity >0])
average.loss <- mean(new.df$mean.rev.equity[new.df$mean.rev.equity <0])
median.win <- median(new.df$mean.rev.equity[new.df$mean.rev.equity >0])
median.loss <- median(new.df$mean.rev.equity[new.df$mean.rev.equity <0])
max.gain <- max(new.df$mean.rev.equity)
max.loss <- min(new.df$mean.rev.equity)
win.loss.ratio <- winning.trades / abs(losing.trades)
summary <- cbind(winning.trades,losing.trades,even.trades,total.days,win.percent,loss.percent,win.loss.ratio,time.invested,average.trade,average.win,average.loss,median.win,median.loss,consec.wins,consec.loss,max.gain,max.loss)
summary <- as.data.frame(summary)
colnames(summary) <- c("Winning Trades","Losing Trades","Even Trades","Total Days","Win %","Loss %","Win Loss Ratio","Time Invested","Average Trade","Average Win","Average Loss","Median Gain","Median Loss","Consec Wins","Consec Loss","Maximum Win","Maximum Loss")
print(summary)

# Multi Day Trades
multi.average.trade <- mean(new.df$perc.output)
multi.average.win <- mean(new.df$perc.output[new.df$perc.output >0])
multi.average.loss <- mean(new.df$perc.output[new.df$perc.output <0])
multi.median.win <- median(new.df$perc.output[new.df$perc.output >0])
multi.median.loss <- median(new.df$perc.output[new.df$perc.output <0])
multi.win.loss.ratio <- multi.average.win / abs(multi.average.loss)
multi.max.gain <- max(new.df$perc.output)
multi.max.loss <- min(new.df$perc.output)
multi.summary <- cbind(multi.winning.trades,multi.losing.trades,multi.total.days,multi.win.percent,multi.loss.percent,multi.win.loss.ratio,time.invested,multi.average.trade,multi.average.win,multi.average.loss,multi.median.win,multi.median.loss,multi.consec.wins,multi.consec.loss,multi.max.gain,multi.max.loss)
multi.summary <- as.data.frame(multi.summary)
colnames(multi.summary) <- c("Winning Trades","Losing Trades","Total Trades","Win %","Loss %","Win Loss Ratio","Time Invested","Average Trade","Average Win","Average Loss","Median Gain","Median Loss","Consec Wins","Consec Loss","Maximum Win","Maximum Loss")
print(multi.summary)
print(performance.table)
print(drawdown.table)
table.Drawdowns(xts1, top=10)
Return.cumulative(xts1, geometric = TRUE)

# Write output to file
write.csv(new.df,file="C:/R Projects/ultra.pro.etf.mean.rev.csv")

Author: Andrew Bannerman
