# equivariant and invariant estimators

When estimating the parameter of a location family, one can take the sample mean or the sample median, or, more generally, any equivariant estimator. In a normal location family the sample mean is efficient; in a double-exponential family the median is better. (Solve for the MLE in each case: whenever the MLE exists, it is asymptotically efficient.)
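This gap is easy to see in a quick Monte Carlo sketch (standard library only; the sample size, replication count, and seed below are arbitrary choices of mine):

```python
import random
import statistics

def laplace():
    # A standard Laplace draw: the difference of two independent Exp(1)s
    return random.expovariate(1.0) - random.expovariate(1.0)

def mse(estimator, sampler, n=25, reps=2000):
    # Monte Carlo mean squared error for estimating the true location 0
    errs = [(estimator([sampler() for _ in range(n)])) ** 2 for _ in range(reps)]
    return sum(errs) / reps

random.seed(0)
normal = lambda: random.gauss(0.0, 1.0)
mean_norm, med_norm = mse(statistics.mean, normal), mse(statistics.median, normal)
mean_lap, med_lap = mse(statistics.mean, laplace), mse(statistics.median, laplace)
print(mean_norm, med_norm)  # mean has smaller MSE under normality
print(mean_lap, med_lap)    # median has smaller MSE under the Laplace
```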

Now, note that these two families are disjoint subfamilies of the exponential power family: when p = 1 you are in a double-exponential (Laplace) family; when p = 2 you are in a normal family.

Michael Sherman, in "Comparing the Sample Mean and the Sample Median: An Exploration in the Exponential Power Family," shows that when p = 1.407 the mean and the median are equally good.
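To poke at the whole family, one can sample from the exponential power density (proportional to exp(-|x|^p)) using the standard gamma trick. This is my own sketch, not Sherman's code, and at finite sample sizes the crossover near p = 1.407 is only approximate:

```python
import random
import statistics

def exp_power(p):
    # Gamma trick: if W ~ Gamma(1/p, 1), then (random sign) * W**(1/p)
    # has density proportional to exp(-|x|**p)
    w = random.gammavariate(1.0 / p, 1.0)
    return random.choice([-1.0, 1.0]) * w ** (1.0 / p)

def mse(estimator, p, n=25, reps=4000):
    # Monte Carlo mean squared error for estimating the true location 0
    errs = [(estimator([exp_power(p) for _ in range(n)])) ** 2 for _ in range(reps)]
    return sum(errs) / reps

random.seed(1)
results = {p: (mse(statistics.mean, p), mse(statistics.median, p))
           for p in (1.0, 1.407, 2.0)}
for p, (m, med) in results.items():
    # median wins at p=1, mean wins at p=2, near-tie in between
    print(p, m, med)
```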

Note that the mean and the median are both examples of linear combinations of the order statistics. We can imagine different linear combinations of the order statistics being optimal for different values of p.
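As a sketch, any such estimator is a weight vector applied to the sorted sample; the mean, the median, and the trimmed mean are all special cases (the helper names below are mine):

```python
def l_estimator(xs, weights):
    # A linear combination of the order statistics: sum of a_i * X_(i)
    return sum(a * x for a, x in zip(weights, sorted(xs)))

def trimmed_mean_weights(n, k):
    # Drop the k smallest and k largest observations, average the rest;
    # the weights sum to 1, so the result is location-equivariant
    return [0.0] * k + [1.0 / (n - 2 * k)] * (n - 2 * k) + [0.0] * k

xs = [3.1, -0.4, 0.2, 9.9, 0.7, -1.2, 0.5]
tm = l_estimator(xs, trimmed_mean_weights(len(xs), 1))
print(tm)  # approximately 0.82: the average after trimming one point per tail
```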

---

This is what I saw in class:

Definition: an equivariant estimator T(X) is one that satisfies T(X + ε 1) = T(X) + ε. i.e. if you shift all the data by some amount ε, the estimator changes by ε. Examples: sample mean, sample median, sample max, sample min.

Definition: an invariant estimator T(X) is one that satisfies T(X + ε 1) = T(X). i.e. if you shift all the data by the same amount, the estimator does not change. Examples: sample variance, interquartile range.
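Both definitions are easy to check numerically; here is a small sketch using standard-library estimators (the data and the shift are arbitrary):

```python
import statistics

xs = [1.0, 4.0, 2.5, 0.5]
eps = 3.0
shifted = [x + eps for x in xs]

# Equivariant: shifting the data by eps shifts the estimate by eps
assert statistics.mean(shifted) == statistics.mean(xs) + eps
assert statistics.median(shifted) == statistics.median(xs) + eps
assert max(shifted) == max(xs) + eps
assert min(shifted) == min(xs) + eps

# Invariant: shifting the data leaves the estimate unchanged
def iqr(v):
    q = statistics.quantiles(v, n=4)  # [Q1, Q2, Q3]
    return q[2] - q[0]

assert abs(statistics.variance(shifted) - statistics.variance(xs)) < 1e-12
assert abs(iqr(shifted) - iqr(xs)) < 1e-12
print("all shift checks pass")
```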

Theorem: if T1 is equivariant and T2 is invariant, then T1 + T2 is equivariant.

Definition: the maximal invariant Y(X) is the (n-1)-dimensional vector (X2 - X1, X3 - X1, ..., Xn - X1).

Theorem: every invariant estimator is a function of the maximal invariant.
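A sketch of the theorem in action: the sample variance (an invariant estimator) can be recovered from the vector of differences alone, because that vector determines the data up to an unknown shift (the variable names are mine):

```python
import statistics

xs = [2.0, 5.0, 3.5, 1.0, 4.0]

# The maximal invariant: differences from the first observation
y = [x - xs[0] for x in xs[1:]]

# Reconstruct the data up to an unknown shift: (0, y_1, ..., y_{n-1})
# equals xs with xs[0] subtracted from every entry
recovered = [0.0] + y

# An invariant estimator gives the same answer on either version
assert abs(statistics.variance(recovered) - statistics.variance(xs)) < 1e-12
print(statistics.variance(xs))
```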

---

This is how I made sense of it all.

Claim 1: Equivariant estimators are precisely the linear combinations of the order statistics $\sum a_i X_{(i)}$ satisfying $a_1 + \dots + a_n = 1$.
Claim 2: Invariant estimators are precisely the linear combinations of the order statistics satisfying $a_1 + \dots + a_n = 0$.
Proof:
Since the $X_i$ are iid, the order statistics $X_{(i)}$ are sufficient.
Non-linear functions of the order statistics cannot be equivariant or invariant. (Not proven here.)
Thus equivariant and invariant statistics must have the form $T(X) = \sum a_i X_{(i)}$.
Now consider how such a $T$ responds to a shift:
$T(X + \epsilon 1) = \sum a_i (X_{(i)} + \epsilon) = \sum a_i X_{(i)} + \epsilon \sum a_i = T(X) + \epsilon \sum a_i$.
For an equivariant estimator this must equal $T(X) + \epsilon$, so it follows that $\sum a_i = 1$.
For an invariant estimator this must equal $T(X)$, so it follows that $\sum a_i = 0$.
QED.
---

For the record, I am beginning to use LyX. It is nice, but when I adjust the displayed font size, the math stays the same size, and so it looks comparatively tiny. This post was made by exporting to XHTML, copy-pasting onto DreamWidth, and then deleting the silly "magicparlabel" A-tags that were making everything look green. mirror of this post