In parametric estimation we assume that we know the type of distribution
(e.g., Normal, Poisson, Bernoulli, etc.) from which our random sample is drawn,
and on the basis of that random sample we must infer the values of the parameters
of the distribution. For example, we take a random sample from a Poisson
distribution and, on the basis of that sample, we infer the value of λ.
In addition, we use the random sample to make statements
about how confident we are in our guess about the values of the parameters.
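As a concrete illustration (a minimal sketch of my own, assuming Python with NumPy; the true λ of 3.5 and the sample size of 200 are made up), the sample mean serves as the estimate of λ, and the standard error of the mean gives a rough confidence statement:

    import numpy as np

    rng = np.random.default_rng(0)

    # Pretend the true parameter is unknown to the analyst.
    true_lambda = 3.5
    sample = rng.poisson(lam=true_lambda, size=200)

    # We know the family is Poisson, so estimating the distribution
    # reduces to picking a value for lambda: the sample mean.
    lambda_hat = sample.mean()

    # A rough measure of confidence: the standard error of the mean,
    # using the Poisson fact that the variance equals lambda.
    std_error = np.sqrt(lambda_hat / sample.size)

    print(f"estimated lambda: {lambda_hat:.3f} (std. error ~ {std_error:.3f})")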
An estimator is a formula, or a rule, that
we use to get values for the parameters. For example, suppose we have an urn with
a large number of balls in it. There are only two colors of balls -- green and red -- in
the urn. We draw 10 balls with replacement and note their color.
Clearly, the best guess about the proportion of green balls in the urn is
the number of green balls drawn divided by 10. Note that if we code each
green ball as 1 and each red ball as 0, this is just the sample mean.
Technically, an estimator is a real-valued function of the sample.
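The following Python sketch mimics the urn example (the urn's true proportion of green balls, 0.7, is an assumption made purely for illustration); it shows the estimator as a real-valued function applied to the 10 recorded draws:

    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical urn: 70% of the balls are green (coded 1), 30% red (coded 0).
    true_proportion_green = 0.7

    # Draw 10 balls with replacement and record their colors.
    draws = rng.binomial(n=1, p=true_proportion_green, size=10)

    def proportion_estimator(sample):
        # The estimator: number of green balls drawn divided by 10,
        # which is exactly the sample mean of the 0/1 draws.
        return sample.mean()

    print(f"estimated proportion of green balls: {proportion_estimator(draws):.2f}")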
Maximum Likelihood Method of Obtaining Estimators
We need a systematic way of getting estimators. The method of maximum
likelihood is a powerful and highly intuitive way of doing so.
It is not fool-proof, but for almost all important distributions of
interest, it provides us with plausible and useful estimators of the
underlying parameters.
The Maximum Likelihood Method has three steps. With respect to
the Bernoulli distribution, these are: First, form the joint distribution of the sample
(the likelihood function of the sample).
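To make this first step concrete, here is a small Python sketch (an illustration under assumed data, not part of the original text) of the likelihood function of a Bernoulli sample, L(p) = p^(sum x_i) * (1 - p)^(n - sum x_i):

    import numpy as np

    # A hypothetical sample of 10 Bernoulli draws (1 = success, 0 = failure).
    sample = np.array([1, 0, 1, 1, 0, 1, 1, 1, 0, 1])

    def likelihood(p, x):
        # Joint distribution of the sample, viewed as a function of p:
        # L(p) = prod_i p^{x_i} (1 - p)^{1 - x_i}
        #      = p^{sum x_i} * (1 - p)^{n - sum x_i}
        successes = x.sum()
        n = x.size
        return p ** successes * (1.0 - p) ** (n - successes)

    # Evaluate the likelihood at a few candidate values of p.
    for p in (0.3, 0.5, 0.7, 0.9):
        print(f"L({p:.1f}) = {likelihood(p, sample):.6f}")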