The conversation with people at the local bar last night was the usual mix of local and national politics. I usually join the pool table after a casual drink. Yesterday, at the request of an old friend, I toyed with the game of darts.

In my heyday, I used to be good at this, so without much trouble, I could hit the bullseye after getting my sight.

Maybe it was the draught that did the trick. I felt jubilant.

A couple of tequilas and a few minutes later, I hoped to beat my previous record.

“Did you change the darts?” “There’s something in my eye.” “Maybe they moved the target.” These thoughts were circling in my mind after my performance.

They did not move the target, they did not change the darts, but there was indeed something in my head. I decided to skip the next round and go home, lest someone else become the bullseye.

While heading back, I thought about how far off I was from the bullseye (origin). On the X-Y coordinate plane, the distance from the origin is $d = \sqrt{x^2 + y^2}$.

The squared distance is $d^2 = x^2 + y^2$.

It varies with each try. Random variable.
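To make that concrete, here is a small Python sketch that simulates a few dart throws. The assumption that the horizontal and vertical errors are independent standard normal variables is mine, for illustration only:

```python
import math
import random

random.seed(42)

# Hypothetical dart throws: the horizontal and vertical errors around the
# bullseye (origin) are modeled as independent standard normal variables.
# The normal-error assumption is an illustrative choice, not from the text.
throws = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(5)]

for x, y in throws:
    d2 = x**2 + y**2       # squared distance from the origin
    d = math.sqrt(d2)      # distance from the origin
    print(f"error = ({x:+.2f}, {y:+.2f}), distance = {d:.2f}, squared = {d2:.2f}")
```

Each run of five throws gives five different squared distances: a random variable.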

This morning, while catching up on my news feed, I noticed this interesting article.

The article starts with a conversation between fans.

People bet millions on heads vs. tails.

“…the toss is totally biased.” It reminded me of the famous article by Karl Pearson, “Science and Monte Carlo,” published in The Fortnightly Review in 1894. He experimentally showed that the roulette wheels in Monte Carlo were biased.

He computed the error (observed – theoretical), or squared error, and showed that the roulette outcomes in Monte Carlo could not have happened by chance. He based this on two sets of 16,500 recorded throws of the ball in a roulette wheel during an eight-week period in the summer of 1892, all tallied by hand, I may add. Imagine counting up 33,000 numbers and performing various sets of hand calculations.

He also writes that he spent his vacation tossing a shilling 25,000 times for a coin-toss bias experiment.

These experiments led to the development of the relative squared error metric, Pearson’s cumulative test statistic (the Chi-square test statistic).
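The statistic itself is simple to compute. Here is a minimal Python sketch using hypothetical roulette counts (made up for illustration, not Pearson's actual tallies):

```python
# Hypothetical counts (not Pearson's actual data): 16,500 roulette spins
# classified as red, black, or zero. A fair single-zero wheel (37 pockets)
# gives expected proportions of 18/37, 18/37, and 1/37.
observed = [8120, 7975, 405]
n = sum(observed)
expected = [n * 18 / 37, n * 18 / 37, n * 1 / 37]

# Pearson's cumulative test statistic: sum of (observed - expected)^2 / expected
chi_sq = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(f"chi-square statistic = {chi_sq:.2f}")
```

A fair wheel yields a small statistic; a large value signals outcomes that are unlikely to happen by chance.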

Some excerpts from his concluding paragraph.

“To sum up, then: Monte Carlo roulette, if judged by returns which are published apparently with the sanction of the Societe, is, if the laws of chance rule, from the standpoint of exact science the most prodigious miracle of the nineteenth century.

…

we are forced to accept as alternative that the random spinning of a roulette manufactured and daily readjusted with extraordinary care is not obedient to the laws of chance, but is chaotic in its manifestation!

By now you might be thinking, “What’s the point here? Maybe it’s his hangover writing.”

Hmm, the square talk is for the Chi-square distribution.

Last week, we visited Mumble’s office. He had transformed the data to a log-normal distribution.

If $Y$ follows a normal distribution, then $X = e^{Y}$ follows a log-normal distribution, because the log of $X$ is normal.

Exponentiating a normal distribution will result in log-normal distribution.

See for yourself how $Y \sim N(\mu, \sigma^2)$ transforms into $X = e^{Y}$, a log-normal distribution. $X$ is non-negative.
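If you want to see the transformation yourself, here is a small Python simulation. The parameters $\mu$ and $\sigma$ are arbitrary illustrative choices:

```python
import math
import random

random.seed(1)
mu, sigma = 0.0, 0.5   # arbitrary illustrative parameters

# Draw from a normal distribution and exponentiate each value.
y = [random.gauss(mu, sigma) for _ in range(100_000)]
x = [math.exp(v) for v in y]   # X = e^Y follows a log-normal distribution

sample_mean = sum(x) / len(x)
theoretical_mean = math.exp(mu + sigma**2 / 2)   # known mean of the log-normal
print(sample_mean, theoretical_mean)   # the two should be close
print(min(x) > 0)   # prints True: X is non-negative
```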

Similarly, squaring a normal distribution will result in a Chi-square distribution.

If $Z$ follows a standard normal distribution, then $Z^2$ follows a Chi-square distribution with one degree of freedom.

See how the same $Z$ transforms into $Z^2$, a Chi-square distribution, again non-negative, since we are squaring.
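A quick simulation sketch: square draws from a standard normal and check them against the known Chi-square(1) cumulative probability $P(Z^2 \le x) = P(-\sqrt{x} \le Z \le \sqrt{x}) = \mathrm{erf}(\sqrt{x/2})$:

```python
import math
import random

random.seed(7)

# Square draws from a standard normal; the result follows a Chi-square
# distribution with one degree of freedom.
z_sq = [random.gauss(0, 1) ** 2 for _ in range(200_000)]

# P(Z^2 <= x) = P(-sqrt(x) <= Z <= sqrt(x)) = erf(sqrt(x/2))
x = 1.0
empirical = sum(v <= x for v in z_sq) / len(z_sq)
theoretical = math.erf(math.sqrt(x / 2))
print(empirical, theoretical)   # the two should be close
```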

Let’s derive the probability density function for the Chi-square distribution using the fact that $\chi^2 = Z^2$. We will assume $Z$ is a standard normal distribution, $Z \sim N(0, 1)$.

For this, we will start with the cumulative distribution function and take its derivative to obtain the probability density function, since $f(x) = \frac{d}{dx} F(x)$.

$F_{\chi^2}(x) = P(\chi^2 \le x) = P(Z^2 \le x) = P(-\sqrt{x} \le Z \le \sqrt{x}) = F_Z(\sqrt{x}) - F_Z(-\sqrt{x})$ is the cumulative distribution function.

Applying the fundamental theorem of calculus and the chain rule together, we get

$f_{\chi^2}(x) = \frac{1}{2\sqrt{x}}\, f_Z(\sqrt{x}) + \frac{1}{2\sqrt{x}}\, f_Z(-\sqrt{x})$

By now, you are familiar with the probability density function for $Z$: $f_Z(z) = \frac{1}{\sqrt{2\pi}} e^{-z^2/2}$. Let’s use this.

$f_{\chi^2}(x) = \frac{1}{2\sqrt{x}} \cdot \frac{1}{\sqrt{2\pi}} e^{-x/2} + \frac{1}{2\sqrt{x}} \cdot \frac{1}{\sqrt{2\pi}} e^{-x/2} = \frac{1}{\sqrt{2\pi}}\, x^{-1/2} e^{-x/2}$ for $x > 0$.

As you can see, the function is only defined for $x > 0$. It is 0 otherwise.

With some careful observation, you can tell that this function is the Gamma density function with $r = \frac{1}{2}$ and $\lambda = \frac{1}{2}$.

Yes, I know, it is not very obvious. Let me rearrange it for you, and you will see the pattern.

Multiply and divide by $\frac{1}{2}$:

$f_{\chi^2}(x) = \frac{1}{\sqrt{2\pi}}\, x^{-1/2} e^{-x/2} = \frac{\frac{1}{2}\, e^{-x/2} \left(\frac{x}{2}\right)^{\frac{1}{2}-1}}{\sqrt{\pi}}$

Drawing from the factorial concepts, we can replace $\sqrt{\pi}$ with $\left(-\frac{1}{2}\right)!$, i.e., $\Gamma\!\left(\frac{1}{2}\right)$, which means $r - 1 = -\frac{1}{2}$, or $r = \frac{1}{2}$:

$f_{\chi^2}(x) = \frac{\frac{1}{2}\, e^{-x/2} \left(\frac{x}{2}\right)^{\frac{1}{2}-1}}{\left(\frac{1}{2}-1\right)!}$

This equation, as you know, is the density function for the Gamma distribution:

$f(x) = \frac{\lambda e^{-\lambda x} (\lambda x)^{r-1}}{(r-1)!}$

So, the Chi-square density function is the Gamma density function with $r = \frac{1}{2}$ and $\lambda = \frac{1}{2}$. It is a special case of the Gamma distribution.
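We can verify this equality numerically. A minimal Python check, using $\Gamma(r)$ in place of $(r-1)!$ (they are the same thing):

```python
import math

def chi_sq_1_pdf(x):
    # The density derived above: (1 / sqrt(2*pi)) * x^(-1/2) * e^(-x/2), x > 0
    return x ** -0.5 * math.exp(-x / 2) / math.sqrt(2 * math.pi)

def gamma_pdf(x, r, lam):
    # Gamma density: lam * e^(-lam*x) * (lam*x)^(r-1) / Gamma(r),
    # where Gamma(r) = (r-1)!
    return lam * math.exp(-lam * x) * (lam * x) ** (r - 1) / math.gamma(r)

for x in (0.5, 1.0, 2.5, 7.0):
    print(chi_sq_1_pdf(x), gamma_pdf(x, r=0.5, lam=0.5))   # identical values
```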

Now let’s go up a level. Sum of squares of two standard normals, like our squared distance ($x^2 + y^2$).

$\chi^2 = Z_1^2 + Z_2^2$.

We know from lesson 46 on convolution that if $X$ and $Y$ are two independent random variables with probability density functions $f_X(x)$ and $f_Y(y)$, their sum $Z = X + Y$ is a random variable with a probability density function that is the convolution of $f_X$ and $f_Y$:

$f_Z(z) = \int f_X(x)\, f_Y(z - x)\, dx$

We can use the principles of convolution to derive the probability density function for $\chi^2 = Z_1^2 + Z_2^2$.

Let’s assume $X = Z_1^2$ and $Y = Z_2^2$. Then $\chi^2 = X + Y$, and

$f_X(x) = f_Y(x) = \frac{1}{\sqrt{2\pi}}\, x^{-1/2} e^{-x/2}$

This is the same function we derived above for $Z^2$. Using this,

$f_{\chi^2}(x) = \int_0^x f_X(y)\, f_Y(x - y)\, dy = \frac{e^{-x/2}}{2\pi} \int_0^x \frac{dy}{\sqrt{y(x - y)}}$

The term with the integral integrates to $\sin^{-1}\!\left(\frac{2y - x}{x}\right)$, and its definite integral is $\pi$, since $\sin^{-1}(1) - \sin^{-1}(-1) = \frac{\pi}{2} - \left(-\frac{\pi}{2}\right) = \pi$. Try it for yourself. Put your calculus classes to practice.

We are left with

$f_{\chi^2}(x) = \frac{e^{-x/2}}{2\pi} \cdot \pi = \frac{1}{2}\, e^{-x/2}$

again for $x > 0$.

This function is a Gamma distribution with $r = 1$ and $\lambda = \frac{1}{2}$.
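You can check the result by simulation: the derived density $\frac{1}{2} e^{-x/2}$ integrates to the cumulative distribution function $1 - e^{-x/2}$, which should match the empirical proportion from samples of $Z_1^2 + Z_2^2$. A quick Python sketch:

```python
import math
import random

random.seed(3)

# Sum of squares of two independent standard normals.
s = [random.gauss(0, 1) ** 2 + random.gauss(0, 1) ** 2 for _ in range(200_000)]

# The derived density (1/2) e^(-x/2) integrates to the CDF 1 - e^(-x/2).
for x in (1.0, 2.0, 5.0):
    empirical = sum(v <= x for v in s) / len(s)
    theoretical = 1 - math.exp(-x / 2)
    print(empirical, theoretical)   # each pair should be close
```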

Generalization for n random normal variables

If there are $n$ standard normal random variables $Z_1, Z_2, \ldots, Z_n$, their sum of squares, $\chi^2 = Z_1^2 + Z_2^2 + \cdots + Z_n^2$, follows a Chi-square distribution with $n$ degrees of freedom.

Its probability density function is a Gamma density function with $r = \frac{n}{2}$ and $\lambda = \frac{1}{2}$. You can derive it by induction.

$f_{\chi^2}(x) = \frac{\frac{1}{2}\, e^{-x/2} \left(\frac{x}{2}\right)^{\frac{n}{2}-1}}{\left(\frac{n}{2}-1\right)!}$ for $x > 0$ and 0 otherwise.
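A quick sanity check in Python: the Chi-square distribution with $n$ degrees of freedom has mean $n$ and variance $2n$, which simulated sums of squares should reproduce (the choice $n = 5$ is arbitrary):

```python
import random

random.seed(9)
n, trials = 5, 100_000   # n = degrees of freedom (an arbitrary choice here)

# Sum of squares of n standard normals: Chi-square with n degrees of freedom.
samples = [sum(random.gauss(0, 1) ** 2 for _ in range(n)) for _ in range(trials)]

mean = sum(samples) / trials
var = sum((s - mean) ** 2 for s in samples) / trials
print(mean, "vs expected", n)       # Chi-square(n) mean is n
print(var, "vs expected", 2 * n)    # Chi-square(n) variance is 2n
```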

Look at this animation of the Chi-square distribution with different degrees of freedom. See how it becomes more symmetric for large values of n (degrees of freedom).

We will revisit the Chi-square distribution when we learn hypothesis testing. Till then, the sum of squares and error square should remind you of Chi-square [Square – Square], and tequila square should remind you of

If you find this useful, please like, share and subscribe.

You can also follow me on Twitter @realDevineni for updates on new lessons.