J: The maximum likelihood estimation method provides the formulation for computing the parameters of the population distribution that agree most closely with the sample observations. For a given sample x_1, x_2, …, x_n, the optimal parameter maximizes the likelihood (the joint probability of occurrence) of observing the sample. Last week, we derived the parameters for the Binomial and the Poisson, both discrete distributions. I wonder if we can use the same logic for continuous distributions?

D: Hello Joe. You look recharged and ready to go, so let’s cut to the chase. Yes, we can apply the same joint probability logic to continuous probability functions. Let’s take a random sample x_1, x_2, …, x_n from a population with a probability density function f(x), as in this figure.

Assume we can divide the space into equal intervals of length dx. Can you tell me the probability that a given sample data point x_i falls in the range (x, x + dx)?

J: That would be the integral of f(x) over the range, ∫ f(x) dx evaluated from x to x + dx.

D: Correct. For an infinitesimally small range dx, we can approximate this area as f(x_i) dx. Since the sample observations are independent and identically distributed, the joint probability of observing them, i.e., the likelihood function, is

L = ∏ f(x_i) dx = (dx)^n ∏ f(x_i)

Maximizing the above function is equivalent to maximizing ∏ f(x_i), since the equal interval length dx is a constant that does not depend on the parameters.

J: I see. So it reduces to the same likelihood we derived last week: the product of the probability density function evaluated at the sample values. We can choose the value of the parameter(s) that maximizes this quantity.
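This idea can be sketched numerically (a minimal illustration of my own, assuming NumPy; the sample values and the exponential density are made up purely as an example, not taken from the lesson):

```python
import numpy as np

# Log-likelihood of an i.i.d. sample: the sum of log densities.
# The constant (dx)^n factor is dropped, as in the derivation above.
def log_likelihood(sample, density, **params):
    return np.sum(np.log(density(sample, **params)))

# Example density: exponential with rate lam (a hypothetical choice).
def exp_density(x, lam):
    return lam * np.exp(-lam * x)

sample = np.array([0.5, 1.2, 0.3, 2.1, 0.8])   # made-up data
print(log_likelihood(sample, exp_density, lam=1.0))   # close to -4.9
```

With lam = 1, each log density is simply −x_i, so the printed value is the negative of the sample sum.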

D: Precisely. Here are Fisher’s own words from his paper.

J: Shall we try some continuous distributions today?

D: We shall. Begin with the exponential distribution.

J: Let me solve it. The probability density function for the exponential distribution is f(x) = λe^(−λx), x ≥ 0. So the likelihood function is

L = ∏ f(x_i) = ∏ λe^(−λx_i) = λ^n e^(−λ Σ x_i)

The log-likelihood will be

ln L = n ln λ − λ Σ x_i

λ is the parameter we are trying to estimate by maximum likelihood. So take the derivative of the function with respect to λ and equate it to 0 to solve for λ:

d(ln L)/dλ = n/λ − Σ x_i = 0

The maximum likelihood estimator for the exponential distribution is λ̂ = n/Σ x_i = 1/x̄, the reciprocal of the sample mean.
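As a quick numerical check (my own sketch, assuming NumPy; the true rate 0.5, the seed, and the grid are arbitrary choices): simulate exponential data and confirm that λ̂ = 1/x̄ agrees with a brute-force search over the log-likelihood.

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.exponential(scale=2.0, size=1000)   # true lambda = 1/scale = 0.5

lam_hat = 1.0 / x.mean()                    # the derived MLE

def loglik(lam):                            # ln L = n ln(lam) - lam * sum(x)
    return len(x) * np.log(lam) - lam * x.sum()

grid = np.linspace(0.1, 2.0, 400)
best_on_grid = grid[np.argmax([loglik(l) for l in grid])]
print(lam_hat, best_on_grid)                # both should be near 0.5
```

The grid search is deliberately crude; its only job is to confirm that the closed-form estimator sits at the top of the likelihood surface.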

D: Excellent. Another continuous function you can try is the Rayleigh density. It is the probability distribution of the distance from the origin to a point with coordinates (x, y) in an x-y plane, where both X and Y have a normal distribution with mean zero and equal variance.

f(x) = (x/σ²) e^(−x²/(2σ²)), x ≥ 0; σ is the parameter.

Do you recognize this function? We learned a probability distribution which is similar to this.

J: Aaah… Chi-square? Sum of squares of normals.

D: Correct. Rayleigh is the distribution of the distance, i.e., the square root of a Chi-square variable with two degrees of freedom, scaled by σ.

J: Let me go through the steps of estimating σ using the maximum likelihood method.

We have one parameter to estimate, σ. The log-likelihood is

ln L = Σ ln x_i − 2n ln σ − (1/(2σ²)) Σ x_i²

Setting the derivative with respect to σ to zero,

d(ln L)/dσ = −2n/σ + (1/σ³) Σ x_i² = 0

The maximum likelihood estimator for the Rayleigh distribution is σ̂² = (1/(2n)) Σ x_i², i.e., σ̂ = √((1/(2n)) Σ x_i²).
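Another hedged sketch (assuming NumPy; σ = 3 and the seed are arbitrary): build Rayleigh data as distances from the origin, exactly as D described, and apply the estimator.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma_true = 3.0
xy = rng.normal(0.0, sigma_true, size=(5000, 2))   # X, Y ~ N(0, sigma^2)
r = np.sqrt((xy ** 2).sum(axis=1))                 # Rayleigh-distributed distances

# sigma_hat^2 = sum(x_i^2) / (2n), from the derivation above
sigma_hat = np.sqrt((r ** 2).sum() / (2 * len(r)))
print(sigma_hat)                                   # close to 3.0
```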

D: Very well done. Now let’s flex our muscles. How can we estimate two or more parameters?

J: Like, for example, the parameters of the normal distribution, μ and σ?

D: Yes.

J: How do we handle two at once?

D: It is a simple extension. Since there will be two parameters in the likelihood function of a normal distribution, we have to solve the set of equations derived through the partial derivatives. Look at these steps for the normal distribution. The density is f(x) = (1/(σ√(2π))) e^(−(x−μ)²/(2σ²)), so the log-likelihood is

ln L = −(n/2) ln(2π) − n ln σ − (1/(2σ²)) Σ (x_i − μ)²

Now we can have two equations based on the partial derivatives of the log-likelihood function.

Let’s take the first partial derivative, with respect to μ:

∂(ln L)/∂μ = (1/σ²) Σ (x_i − μ) = 0, which gives μ̂ = (1/n) Σ x_i = x̄

Now, take the second partial derivative, with respect to σ:

∂(ln L)/∂σ = −n/σ + (1/σ³) Σ (x_i − μ)² = 0, which gives σ̂² = (1/n) Σ (x_i − x̄)²

The maximum likelihood estimators for the Normal distribution are μ̂ = x̄ and σ̂² = (1/n) Σ (x_i − x̄)², the sample variance with divisor n.
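The two-parameter result can be checked the same way (again a sketch of my own, assuming NumPy; the true values 10 and 2 are arbitrary). Note that the MLE variance divides by n, not n − 1:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=10.0, scale=2.0, size=2000)   # made-up sample

mu_hat = x.mean()                                # MLE of mu: the sample mean
sigma2_hat = ((x - mu_hat) ** 2).mean()          # MLE of sigma^2: divisor n
print(mu_hat, np.sqrt(sigma2_hat))               # near 10 and 2
```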

J: That makes it very clear. There is some work involved in solving, but the procedure is straightforward.

D: You can try estimating the parameters of the lognormal distribution for practice.

J: Well, the lognormal has a very similar probability density function. Let me try the Gumbel instead. It is the double exponential distribution, so it will be fun to solve.

D: Okay. Go for it.

J: The cumulative distribution function for the Gumbel distribution is F(x) = e^(−e^(−(x−μ)/β)). μ (the location) and β (the scale) are the parameters.

Taking the derivative of this function, we can get the probability density function for the Gumbel distribution as

f(x) = (1/β) e^(−(x−μ)/β) e^(−e^(−(x−μ)/β))

The likelihood function is

L = ∏ f(x_i), with log-likelihood ln L = −n ln β − (1/β) Σ (x_i − μ) − Σ e^(−(x_i−μ)/β)

We can get the partial derivatives with respect to μ and β and solve the resulting system of equations.

D: Yes, I know. There is no closed-form solution for this system; you have to use iterative numerical techniques. Do you remember the Newton-Raphson method?

J: Umm… not quite.

D: It’s okay if you do not remember. We will let R do it for us. The moral is that the MLE method is not a panacea. We also run into issues of discontinuity in likelihood functions sometimes. Think of the uniform distribution, for example: for a sample from Uniform(0, θ), the likelihood θ^(−n) is largest at the smallest θ consistent with the data, so θ̂ = max(x_i), a point we cannot reach by setting a derivative to zero. As you know, we cannot use calculus methods for discontinuous functions. A combination of the MLE method with numerical techniques and approximations is a good approach.
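As an illustration of that numerical route (a sketch in Python rather than R; the simulated data, seed, and solution scheme are my additions, with a fixed-point iteration standing in for Newton-Raphson): the two Gumbel likelihood equations can be reduced to a single equation in β, β = x̄ − Σ x_i e^(−x_i/β) / Σ e^(−x_i/β), which we iterate to convergence; μ then follows in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.gumbel(loc=5.0, scale=2.0, size=3000)   # true mu = 5, beta = 2

beta = x.std()                        # starting guess for the scale
for _ in range(200):                  # fixed-point iteration on beta
    w = np.exp(-x / beta)
    beta = x.mean() - (x * w).sum() / w.sum()

# mu follows from the first likelihood equation: sum of exp(-(x-mu)/beta) = n
w = np.exp(-x / beta)
mu = -beta * np.log(w.mean())
print(mu, beta)                       # should be near 5 and 2
```

The same negative log-likelihood could instead be handed to a general-purpose optimizer; the fixed-point form simply avoids working out derivatives by hand.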

J: I can see how things are building up. Are there any such pitfalls I need to pay attention to?

D: Yes, there are some properties, limitations and real-world applications we can learn. Do you want to do it now?

J: Let’s do it next week. I am afraid these equations are going to be my sweet dreams tonight.

To be continued…

If you find this useful, please like, share and subscribe.

You can also follow me on Twitter @realDevineni for updates on new lessons.