We all know that the Maximum Likelihood Estimator (MLE) is justified primarily on the basis of its desirable (large-sample) asymptotic properties. Specifically, under the usual regularity conditions, the MLE is generally weakly consistent, asymptotically efficient, and its limit distribution is Normal. There are some important exceptions to this, but by and large that's what you get.
When it comes to finite-sample properties, the MLE may be unbiased or biased, and efficient or inefficient, depending on the context. It can be a "mix and match" situation, even within a single problem. For instance, in the standard linear multiple regression model with Normal errors and non-random regressors, the MLE for the coefficient vector is unbiased, while the MLE for the variance of the error term is biased.
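A small Monte Carlo sketch can make the regression example concrete. With k regressors (including the intercept), the MLE of the error variance is RSS/n, whose expectation is σ²(n − k)/n rather than σ²; dividing by (n − k) instead removes the bias. The particular sample size, coefficients, and variance below are illustrative choices, not anything from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 20, 3          # sample size; number of regressors (incl. intercept)
sigma2 = 4.0          # true error variance
beta = np.array([1.0, 2.0, -0.5])

# Non-random regressors: X is drawn once and held fixed across replications
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])

reps = 20000
mle_vals = np.empty(reps)
for r in range(reps):
    y = X @ beta + rng.normal(scale=np.sqrt(sigma2), size=n)
    b_hat = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS = MLE for beta
    resid = y - X @ b_hat
    mle_vals[r] = resid @ resid / n                # MLE of sigma^2: RSS / n

# Average MLE is near sigma2 * (n - k) / n = 3.4, not 4.0 ...
print(np.mean(mle_vals))
# ... while the degrees-of-freedom-corrected version centers on 4.0
print(np.mean(mle_vals) * n / (n - k))
```

With n = 20 and k = 3 the downward bias is substantial (about 15% of σ²), which is exactly the small-sample situation the next paragraph has in mind.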
Since we often use the MLE with relatively small samples, evaluating (and compensating for) any such bias is of some practical interest.