Now that we have seen a first use case of function readYahooFinance , we want to put Julia and the EconDatasets package to the test with a more challenging task: downloading adjusted stock prices for all constituents of the S&P 500 in a fully automated way. First, we need a list of the ticker symbols of all constituents, which we can get from the S&P homepage. However, this list is stored as an Excel sheet with .xls extension, and we need to read this binary file with package Taro .

To make Taro work, you first need to make sure that it is able to find Java on your system. If your Java path deviates from the default settings, set the JAVA_LIB environment variable accordingly in your .bashrc file. In my case, the variable is set as follows:

# 64-bit machine
# JAVA_LIB="/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server/"

# 32-bit machine
JAVA_LIB="/usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/server/"
export JAVA_LIB

We can now install and load package Taro :

Pkg.add("Taro")

using Taro
Taro.init()

Found libjvm @ /usr/lib/jvm/java-7-openjdk-i386/jre/lib/i386/server/

If something in your Taro configuration is not correct, you will get an error at this step. In that case, you could simply download the Excel sheet and export it to .csv manually, which you can then read in with function readtable from the DataFrames package.
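As a minimal sketch of that fallback, the raw cells of a manually exported CSV file can also be read with readdlm from base Julia. The filename and sample rows below are made up for illustration; on current Julia versions readdlm lives in the DelimitedFiles standard library, while on the 0.3-era Julia used in this text it is part of Base:

```julia
using DelimitedFiles  # not needed on old Julia, where readdlm is in Base

# A made-up miniature version of the exported constituents sheet:
open("constituents.csv", "w") do io
    write(io, "Constituent,Symbol\n3M Co,MMM\nAbbott Laboratories,ABT\n")
end

# readdlm returns a matrix of raw cell values, header row included:
cells = readdlm("constituents.csv", ',')
```

The resulting matrix could then be wrapped in a DataFrame by hand.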

Otherwise, you can use Taro to download and read in the respective part of the Excel sheet:

url = "http://us.spindices.com/idsexport/file.xls?hostIdentifier=48190c8c-42c4-46af-8d1a-0cd5db894797&selectedModule=Constituents&selectedSubModule=ConstituentsFullList&indexId=340"
filepath = download(url)
constituents = Taro.readxl(filepath, "Constituents", "A10:B511")
head(constituents)

Constituent          Symbol
3M Co                MMM
Abbott Laboratories  ABT
AbbVie Inc.          ABBV
Accenture plc        ACN
ACE Limited          ACE
Actavis plc          ACT

We now should have the name and ticker symbol of each S&P 500 constituent stored as a DataFrame . In my case, however, there is one ticker symbol too many, for reasons I do not know:

(nTicker, nVars) = size(constituents)

501 2

This is an inconsistency that I will not investigate further at this point. In addition, however, some of the ticker symbols are automatically read in as boolean values, and we will have to convert them to strings first. Let’s display all constituents with boolean values:

isBoolTicker = [isa(tickerSymbol, Bool) for tickerSymbol in constituents[:Symbol]]
constituents[find(isBoolTicker), :]

Constituent    Symbol
AT&T Inc       true
Ford Motor Co  false

The reason is that the respective ticker symbols are “T” and “F”, which get interpreted as boolean values. Once we have corrected this mistake, we transform the array of ticker symbols into an Array of type ASCIIString .

indTrue = find(constituents[2] .== true)
indFalse = find(constituents[2] .== false)
constituents[indTrue, 2] = "T"
constituents[indFalse, 2] = "F"
tickerSymb = ASCIIString[constituents[:Symbol]...]
tickerSymb[1:5]

MMM ABT ABBV ACN ACE
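The correction above can be illustrated on a small standalone array (the sample column below is made up): boolean cells are mapped back to their one-letter tickers, while all other entries pass through unchanged:

```julia
# Toy column mixing proper ticker strings with the two misparsed booleans:
symbols = Any["MMM", true, "ABT", false]

# Map true back to "T" and false back to "F", keep everything else as-is:
fixed = [s === true ? "T" : s === false ? "F" : s for s in symbols]
```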

Now that we have a list of all ticker symbols, in principle we could apply the same procedure as before: download each stock, extract the adjusted closing prices, and join all individual price series. However, as we have roughly 500 stocks, this procedure would take more than 16 minutes even if each individual stock took only 2 seconds. Hence, we strive for a much faster result using Julia’s parallel computing capabilities, and this is already implemented as function readYahooAdjClose .

Under the hood, readYahooAdjClose uses a map-reduce structure. In the map step, for any given ticker symbol we download the data, extract the adjusted closing prices and rename the column to its ticker symbol. In the reduce step we specify an operation that combines the individual results of the map step; in our case, this is function joinSortedIdx_outer .
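The pattern itself boils down to a few lines. In the deliberately simplified sketch below, squares and addition stand in for the downloads and joinSortedIdx_outer; this is an illustration of the pattern, not the actual implementation:

```julia
# Map step: apply a function independently to every element
mapped = map(x -> x * x, [1, 2, 3, 4])

# Reduce step: pairwise combine the individual results into one
total = reduce(+, mapped)
```

Because each map call is independent of the others, the map step is the natural place to parallelize.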

Let’s now set the stage for parallel computation: add three additional processes and load the required packages on each process.

addprocs(3)
@everywhere using Dates
@everywhere using DataFrames
@everywhere using TimeData
@everywhere using EconDatasets

3-element Array{Any,1}:
 2
 3
 4
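The mechanism behind the parallel download can be sketched with pmap, which behaves like map but farms each call out to the available worker processes. The toy function here stands in for the actual download; on current Julia versions pmap is provided by the Distributed standard library, while on the 0.3-era Julia used here it is built in:

```julia
using Distributed  # not needed on old Julia, where pmap is in Base

# pmap distributes the calls over the worker processes; with no extra
# workers added it simply runs everything on the master process:
squares = pmap(x -> x * x, 1:5)
```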

To run the parallelized code, simply call function readYahooAdjClose :

dates = Date(1960,1,1):Date(2014,7,20)

## measure time
t0 = time()
@time vals = readYahooAdjClose(dates, tickerSymb, :d)
t1 = time()
elapsedTime = t1 - t0
mins, secs = divrem(elapsedTime, 60)

Downloading all 500 stocks took only:

println("elapsed time: ", int(mins), " minutes, ", ceil(secs), " seconds")

elapsed time: 3 minutes, 45.0 seconds
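The divrem call from the timing code simply splits the elapsed seconds into whole minutes and a remainder; for the 225 seconds measured above:

```julia
# 3 * 60 + 45 = 225 seconds total
mins, secs = divrem(225, 60)
```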

Now we convert the data from type Timedata to type Timenum and store the result in the EconDatasets data directory: