Modelling of financial markets is usually undertaken using stochastic processes. Stochastic processes are collections of random variables indexed, for our purposes, by time. Examples of stochastic processes used in finance include geometric Brownian motion (GBM), the Ornstein-Uhlenbeck (OU) process, the Heston model, and jump diffusion processes. For a more mathematically detailed explanation of stochastic processes, diffusion and jump diffusion models, read this article. To get an intuitive feeling for how these different stochastic processes behave, visit the interactive web application that I worked on in conjunction with Turing Finance and Southern Ark.

As was witnessed during the recent financial crisis, stock markets exhibit jumps. That is, they exhibit large falls in value. Over the past couple of decades, there has been increasing interest in the modelling of these jumps. The classic model for modelling jumps in stochastic processes is the Merton jump diffusion model. This model says that the returns from an asset are driven by “normal” price vibrations (representing the continuous diffusion component) and “abnormal” price vibrations (representing the discontinuous jump component). The SDE of the Merton jump diffusion model is given as:

$$\frac{dS_t}{S_t} = \mu\,dt + \sigma\,dW_t + d\left(\sum_{i=1}^{N_t} (Y_i - 1)\right),$$

where $N_t$ is a Poisson process with rate $\lambda$ and $Y_i$ has a log-normal distribution.
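To make the two components of the Merton model concrete, the following is a minimal Python sketch that simulates log returns from a jump diffusion: a Gaussian diffusion part plus a Poisson-driven jump part. All parameter values (`mu`, `sigma`, `lam`, the jump mean and standard deviation, and the daily frequency) are illustrative assumptions, not values used later in the post.

```python
import numpy as np

def simulate_merton(mu=0.1, sigma=0.2, lam=5.0, jump_mu=-0.1, jump_sigma=0.15,
                    T=1.0, n=252, seed=0):
    """Simulate log returns from a Merton jump diffusion.

    Diffusion part: (mu - sigma^2/2) dt + sigma dW_t per step.
    Jump part: a Poisson(lam * dt) number of jumps per step, each with a
    normally distributed log jump size (so the jump multiplier is log-normal).
    """
    rng = np.random.default_rng(seed)
    dt = T / n
    diffusion = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    n_jumps = rng.poisson(lam * dt, size=n)  # number of jumps in each interval
    # Sum of k iid N(jump_mu, jump_sigma^2) log jump sizes is N(k*jump_mu, k*jump_sigma^2).
    jumps = jump_mu * n_jumps + jump_sigma * np.sqrt(n_jumps) * rng.standard_normal(n)
    return diffusion + jumps  # log returns

returns = simulate_merton()
prices = 100 * np.exp(np.cumsum(returns))  # price path recovered from log returns
```

Setting `lam=0` switches the jump component off, which reduces the simulation to plain GBM.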

In practice, before we can proceed with fitting a jump diffusion model to data, we first have to establish if the data that we are fitting the model to has jumps. This requires us to statistically test for the presence of jumps in return data.

There are various tests that have been developed for testing for jumps in return data. Examples of such tests include the bi-power variation test of Barndorff-Nielsen and Shephard (2006). This jump test compares an estimate of variance that is not robust to the presence of jumps, called realized variance, with an estimate of variance that is robust to the presence of jumps, called bi-power variation. This test was improved by Aït-Sahalia and Jacod (2009), who compare the bi-power variations of returns sampled at different frequencies. Lee and Mykland (2008) also used insights from the test of Barndorff-Nielsen and Shephard (2006) by testing for the presence of jumps at each observed value of the process, while taking into account the volatility of the process at the time the observation was made. The test of Lee and Mykland (2008) has the added advantage that it not only indicates whether or not jumps have occurred, but also gives information about when the jumps occurred and their size.

In this blog post, I propose a test for the presence of jumps using neural networks. This test is then assessed by comparing it, via simulation, to the Lee and Mykland (2008) test; we then look at how the neural network test fares on stocks on the JSE.

The Neural Network Test

Neural networks are a family of machine learning models inspired by biological neural networks. For a detailed analysis of neural networks and the algorithm used to train them, please refer to this article by Turing Finance.

As mentioned above, the test I am proposing uses neural networks to test for jumps. This test establishes whether or not the whole series of returns has jumps. That is, the test has a binary outcome. This means that we can treat testing for the presence of jumps as a classification problem: we want to classify a set of returns as belonging to one of two categories, having jumps or not having jumps.

Given that neural networks can perform well in classification problems, such as in credit rating, it seems natural to try to see how neural networks perform when trained to distinguish between a set of returns that has jumps and one that does not.

Architecture of Neural Network

As the test uses neural networks, we need to carefully think about the architecture of the neural network. That is, we need to think of: what the inputs to the network are, what number of hidden layers (and associated number of neurons) we should have, and what the output layer should look like.

The inputs I have chosen for the neural network are: the mean, the second centered moment (the variance), skewness, kurtosis, and the fifth through eighth centered moments. All of the moments used are sample moments. These particular variables were chosen as inputs because the tests of Barndorff-Nielsen and Shephard (2006), Aït-Sahalia and Jacod (2009) and Lee and Mykland (2008) use versions of these moments as their test statistics, so we believe that these moments should have strong predictive power. However, it should be noted that the moments are not necessarily independent, and this could affect the performance of the neural network; the inputs into the neural network thus still need further work. Let $r_1, \dots, r_n$ be a series of log returns with sample mean $\bar{r}$. The moment inputs are then given as:

$$m_k = \frac{1}{n}\sum_{i=1}^{n}(r_i - \bar{r})^k, \qquad \text{skewness} = \frac{m_3}{m_2^{3/2}}, \qquad \text{kurtosis} = \frac{m_4}{m_2^{2}},$$

so that the eight inputs are $\bar{r}$, $m_2$, the skewness, the kurtosis, and $m_5, m_6, m_7, m_8$.
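The eight moment inputs are straightforward to compute. The following Python sketch (the post's own code is in R) builds the feature vector from a series of log returns, interpreting the first input as the sample mean:

```python
import numpy as np

def moment_features(r):
    """Compute the eight moment-based inputs from a series of log returns r:
    mean, variance (second centered moment), skewness, kurtosis, and the
    fifth through eighth centered moments (all sample moments)."""
    r = np.asarray(r, dtype=float)
    mean = r.mean()
    centered = r - mean
    m = {k: np.mean(centered**k) for k in range(2, 9)}  # centered moments m_2 .. m_8
    skew = m[3] / m[2]**1.5
    kurt = m[4] / m[2]**2
    return np.array([mean, m[2], skew, kurt, m[5], m[6], m[7], m[8]])
```

For a long series of standard normal returns, the kurtosis entry should be close to 3 and the odd centered moments close to 0, which is a useful sanity check.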

A single hidden layer with 10 neurons was chosen. This is mainly because we believe that the relationship between the inputs and the outputs is definitely non-linear, so at least one hidden layer was required; but we also wanted to keep the run-times within reason, so we chose only 10 neurons for this hidden layer.

Since we only want to classify a set of returns as having jumps or not, the output layer only has one neuron. This neuron produces a value that is interpreted as 1 (if there are jumps) or 0 (if there are no jumps).

Figure 1 gives us an example of the architecture of the neural network used in this blog post.

It is important to note that this particular architecture was chosen just for illustrating how one would think about testing for jumps using neural networks. It is by no means necessarily the “best” architecture. This is definitely an area for future work. We hope to cover this in later posts.

Having decided on the architecture of the neural network, we still needed to train it. The neural network was trained on 3000 observations from a process that has jumps (generated using the Merton jump model) and from a process which does not have jumps (generated using GBM). The neural network was trained using the neuralnet package in R.
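As a rough illustration of the training step, here is a self-contained Python/numpy stand-in for what the R neuralnet call does: simulate labelled return series with and without jumps, compute moment features, and fit a single-hidden-layer network with 10 sigmoid neurons by gradient descent. The series lengths, jump intensity and sizes, training-set size, and the reduced four-moment feature set are all assumptions for the sketch, not the author's actual R setup.

```python
import numpy as np

rng = np.random.default_rng(42)

def features(r):
    """Simplified subset of the moment inputs: mean, variance, skewness, kurtosis."""
    c = r - r.mean()
    m2, m3, m4 = (np.mean(c**k) for k in (2, 3, 4))
    return np.array([r.mean(), m2, m3 / m2**1.5, m4 / m2**2])

# Toy training set (assumed parameters): 300 series of Gaussian returns
# (no jumps, label 0) and 300 series with occasional large jumps (label 1).
X, y = [], []
for _ in range(300):
    no_jump = 0.01 * rng.standard_normal(250)
    jumpy = no_jump + rng.binomial(1, 0.02, 250) * rng.normal(-0.05, 0.03, 250)
    X += [features(no_jump), features(jumpy)]
    y += [0.0, 1.0]
X, y = np.array(X), np.array(y)
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize the inputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

# One hidden layer with 10 sigmoid neurons and a single sigmoid output neuron.
W1, b1 = rng.normal(0.0, 0.5, (4, 10)), np.zeros(10)
W2, b2 = rng.normal(0.0, 0.5, (10, 1)), np.zeros(1)

lr = 0.5
for _ in range(3000):                        # batch gradient descent, cross-entropy loss
    H = sigmoid(X @ W1 + b1)                 # hidden-layer activations
    p = sigmoid(H @ W2 + b2).ravel()         # predicted probability of jumps
    d_out = ((p - y) / len(y))[:, None]      # output-layer error
    d_hid = (d_out @ W2.T) * H * (1.0 - H)   # error back-propagated to the hidden layer
    W2 -= lr * (H.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_hid); b1 -= lr * d_hid.sum(axis=0)

p = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2).ravel()
accuracy = np.mean((p > 0.5) == (y == 1.0))  # in-sample classification accuracy
```

Because jumps fatten the tails of the return distribution, even this simplified feature set separates the two classes well in-sample.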

Simulation study

Simulations were undertaken to assess how the neural network test performs against the Lee & Mykland (2008) test. The underlying model being assumed is the basic Merton model discussed above. Using simulations, we worked out the Probability of ACTUAL detection (the test being able to detect jumps in a series that has jumps) and the Probability of FALSE detection (the test incorrectly detecting jumps in a series of returns that does not have jumps) for each of the tests. The simulation was conducted at a daily frequency, using different combinations of the parameters. A more rigorous comparison would have to compare the two tests at different frequencies, and for large and small jumps.
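The structure of such a simulation study can be sketched as follows. For illustration only, the sketch uses a crude kurtosis-threshold rule as the jump detector (not the neural network test or the Lee & Mykland test), and the process parameters are assumed values; the point is how the two detection probabilities are estimated by Monte Carlo.

```python
import numpy as np

rng = np.random.default_rng(7)

def flags_jumps(r, kurtosis_cutoff=4.0):
    """Illustrative stand-in jump test (not Lee & Mykland): flag a series
    whose sample kurtosis exceeds a cutoff, since jumps fatten the tails."""
    c = r - r.mean()
    return np.mean(c**4) / np.mean(c**2) ** 2 > kurtosis_cutoff

n_sims, n_obs = 1000, 250
n_actual = n_false = 0
for _ in range(n_sims):
    no_jump = 0.01 * rng.standard_normal(n_obs)       # diffusion-only returns
    jumpy = no_jump + rng.binomial(1, 0.02, n_obs) * rng.normal(-0.05, 0.03, n_obs)
    n_actual += flags_jumps(jumpy)     # correct detection on a series with jumps
    n_false += flags_jumps(no_jump)    # false detection on a series without jumps

p_actual = n_actual / n_sims           # estimated Probability of ACTUAL detection
p_false = n_false / n_sims             # estimated Probability of FALSE detection
```

Replacing `flags_jumps` with the trained network (or with the Lee & Mykland statistic) and repeating over a grid of parameters gives the comparison reported below.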

We have summarized the results of the simulations conducted in the table below:

| Test | Probability of ACTUAL detection | Probability of FALSE detection |
| --- | --- | --- |
| Neural Network Test | 0.994 | 0.021 |
| Lee & Mykland Test | 0.967 | 0.158 |

Based on the simulation results in the table above, the neural network test appears to perform better than the Lee & Mykland (2008) test: its probability of actual detection is higher, and its probability of false detection is lower.

Given that we have seen how the test performs on simulated data, we are now in a position to apply the test on data from the Johannesburg Stock Exchange.

Applying the Test to JSE Data

After seeing how the neural network test for jumps performs in simulations, we applied the test to 217 stocks which are listed on the Johannesburg Stock Exchange (JSE). The various stocks used in this post, categorized by industry, are shown in the table below.

The result of the implementation of the Neural Network test on JSE stocks is presented in the table below. Note that only the stocks in each sector that had jumps are shown.

A summary of the results of the jump test per sector is presented in the table below:

| Sector | Basic Materials | Industrials | Consumer Goods | Health Care | Consumer Services | Financials | Technology |
| --- | --- | --- | --- | --- | --- | --- | --- |
| % of stocks with jumps | 34% | 54% | 53% | 14% | 37% | 20% | 70% |

The table suggests that the most "jumpy" sectors are the Technology, Consumer Goods and Industrials sectors. A deeper analysis of each of these sectors should reveal some interesting conclusions. It should be noted that for some sectors, e.g. Oil and Gas, not enough data was collected; the results for such sectors should be treated with caution.

Conclusion

In this blog post, we looked at a possible test for the presence of jumps using neural networks. The neural network test performs well under simulations, but the architecture used in this blog post is not necessarily optimal, and a more rigorous study of the different architectures would be useful. When the neural network test was applied to data on the JSE, it was found that the Technology sector is the most jumpy, with 70% of stocks in the sector having jumps. However, more data for each sector on the JSE would be necessary to have more confidence in this result.

Code Used

In this section we provide the R code that was used in this blog post.