Chiara Longo, PhD

Modeling and forecasting stock market volatility has been the focus of extensive empirical and theoretical investigation by both academics and practitioners. The motivation for this line of research is clear: volatility is one of the most critical issues in finance. It is typically measured as the standard deviation of returns and is a primary quantitative indicator of risk.

The need to understand the determinants of market volatility and to forecast it has led to a constant stream of new models, each designed to overcome the weaknesses of earlier versions.

In order to better understand where we are going, let’s have a look at where we’ve come from and the progress in this area of research to date.

Evolution of Volatility Models

The most widely used approach to modeling volatility has been the simple calculation of historical variance. Historical volatility is the variance (or standard deviation) of returns computed over a chosen historical window, with that value taken as the forecast for the future. Because of the simplicity of its calculation, historical volatility has traditionally been used as an input to financial instrument pricing models, and despite evidence favoring more sophisticated and accurate models, it is still regarded as a benchmark.
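As a minimal sketch (the `returns` series, the 30-day window, and the 365-day annualization factor are all illustrative assumptions of mine, not choices specified in this post):

```python
import numpy as np
import pandas as pd

# Placeholder series standing in for real daily BTC returns.
rng = np.random.default_rng(0)
returns = pd.Series(rng.normal(0, 0.04, 500))


def historical_volatility(r: pd.Series, window: int = 30) -> float:
    """Standard deviation of the last `window` daily returns,
    annualized by the square-root-of-time rule (365 days, since
    crypto markets trade every day)."""
    return r.iloc[-window:].std() * np.sqrt(365)


# This single number is then carried forward, unchanged, as the
# forecast of future volatility: the core weakness of the approach.
sigma_hat = historical_volatility(returns)
```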

To account for the time-varying behavior of volatility, other approaches have been devised. These include the simple moving average (computed over a time window whose width is chosen according to the purpose of the analysis) and the exponentially weighted moving average (EWMA), which assigns heavier weights to the most recent observations. A sketch of both follows.
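This sketch reuses the hypothetical `returns` series from the snippet above; the 30-day window and the λ = 0.94 decay (see the footnote below the chart) are the choices used in this post's comparison:

```python
import numpy as np

LAMBDA = 0.94   # decay parameter (see footnote: RiskMetrics value)
WINDOW = 30     # rolling window width, in days

# Simple moving average: equally weighted standard deviation
# over a 30-day rolling window.
ma_vol = returns.rolling(WINDOW).std()

# EWMA volatility via the RiskMetrics recursion
#   sigma^2(t) = LAMBDA * sigma^2(t-1) + (1 - LAMBDA) * r^2(t),
# which pandas' ewm(alpha=1-LAMBDA, adjust=False) reproduces exactly.
ewma_vol = np.sqrt(returns.pow(2).ewm(alpha=1 - LAMBDA, adjust=False).mean())
```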

To illustrate these models, here is a comparison between the actual volatility of returns for Bitcoin (BTC) and the volatility estimated under each of these approaches.

In the chart it’s easy to see that the historical volatility model is not the most appropriate choice, especially when we deal with highly dynamic variables such as crypto assets. The two moving averages, both calculated on a 30-day rolling window, clearly capture the dynamics of the actual volatility, but the exponentially weighted one (EWMA)[1] appears more accurate, since it attributes more importance to the most recent information.
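For completeness, a minimal matplotlib sketch of this kind of comparison, reusing the series computed above; using absolute returns as the "actual volatility" proxy is my assumption, not necessarily what the original figure used:

```python
import matplotlib.pyplot as plt

realized = returns.abs()   # crude proxy for actual daily volatility

ax = realized.plot(color='lightgrey', label='actual volatility')
ma_vol.plot(ax=ax, label='30-day moving average')
ewma_vol.plot(ax=ax, label='EWMA (lambda = 0.94)')
ax.legend()
plt.show()
```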

[1] The value of the decay parameter λ used to calculate the EWMA weights has been set to 0.94, as that is approximately the value used by RiskMetrics.

The Current State of the Modeling Art

Now let’s have a look at another chart.

Here, although the black line and the red line seem to follow a similar path, the red line better catches the spikes (or bursts) in the volatility of returns. That red line represents the volatility estimated using a GARCH model (Engle’s ARCH model of 1982, generalized by Bollerslev in 1986).

Despite the intimidating name, a GARCH (Generalized AutoRegressive Conditional Heteroskedastic) model is quite easy to understand: it simply models the variance of our returns as a function of its own history as well as lags of the squared errors.
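In the workhorse GARCH(1,1) case this reads σ²(t) = ω + α·ε²(t-1) + β·σ²(t-1), where ε(t-1) is the previous period’s return shock. As a sketch of how such a model can be estimated (again on the hypothetical `returns` series; this is not necessarily the exact specification behind the chart), the Python `arch` package does the heavy lifting:

```python
from arch import arch_model

# arch's optimizer behaves best with returns scaled to percent.
model = arch_model(returns * 100, mean='Constant', vol='GARCH', p=1, q=1)
result = model.fit(disp='off')

print(result.summary())   # estimated omega, alpha and beta

# In-sample conditional volatility, rescaled to raw-return units.
garch_vol = result.conditional_volatility / 100

# One-step-ahead variance forecast.
forecast = result.forecast(horizon=1)
```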

Moreover, what you see here is the vanilla version of GARCH. Since the appearance of this class of models, the desire to understand and model the distinctive features of return volatility (leptokurtosis, volatility clustering, leverage effects; see the previous post for a brief explanation) has pushed this research forward and led to quite a large variety of GARCH specifications that address those issues and exploit this precious source of information to improve forecast accuracy. Nelson’s Exponential GARCH and the GJR-GARCH, for example, account for the asymmetric reaction of volatility to positive and negative shocks.
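To give a flavor of those richer specifications, here is a hedged sketch using the same `arch` package and hypothetical data; the `o=1` term adds the GJR-GARCH asymmetry component, while `vol='EGARCH'` switches to Nelson’s exponential form:

```python
from arch import arch_model

# GJR-GARCH(1,1,1): the o=1 term lets negative shocks raise
# next-period variance more than positive shocks of equal size.
gjr = arch_model(returns * 100, vol='GARCH', p=1, o=1, q=1)
gjr_result = gjr.fit(disp='off')

# Nelson's EGARCH(1,1,1): models log-variance, so asymmetry
# enters naturally and no positivity constraints are needed.
egarch = arch_model(returns * 100, vol='EGARCH', p=1, o=1, q=1)
egarch_result = egarch.fit(disp='off')
```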

At this point, one might wonder, given the similar behavior of EWMA and GARCH volatility on our data, why we should go with the latter, which might be a bit trickier to estimate, when apparently we can obtain good enough results with the simpler method. Well, let’s think about that for a second. When I calculated the EWMA, I chose two parameters: the window width and the decay parameter that determines the weights. I could tweak this and that just to give you something you like. I COULD CHOOSE.

Conversely, the output of a GARCH model comes directly from the data: no parameters are purposely chosen; everything is estimated from the data themselves.

It goes without saying that, in a game where one can win big but lose even bigger, and in an environment as dynamic as the crypto world, it’s a good habit to rely on facts instead of mere opinions; and only data can deliver facts.

And the Winner Is?

So, in reviewing candidates for modeling volatility, we have concluded that GARCH is the most promising model. It may not become a household word, but Generalized AutoRegressive Conditional Heteroskedastic modeling can be a tool of great value for cryptocurrency investors, especially since this asset class is likely to remain characterized by high volatility for some time to come.

About the Author

Dr. Chiara Longo is Chief Economist at Pareto Network, where she leads the Predictive Analytics and Economic Research department. Dr. Longo’s experience in econometric modeling of commodities and currencies includes work at Bank of the West/BNP Paribas, KBB, and ENI Spa (the Italian national energy group). She holds a PhD in Economics from the Università degli Studi di Milano and a Master’s in Econometrics and Economic Theory from the Université de Toulouse.

About Pareto Network

The Pareto Network is a peer-to-peer financial content network. pareto.network