Feb 22

## Sufficiency and minimal sufficiency

### Sufficient statistic

I find it better for the notation of a statistic to reflect its dependence on the argument. So I write $T\left( X\right)$ for a statistic, where $X$ is a sample, instead of a faceless $U$ or $V.$

Definition 1. The statistic $T\left( X\right)$ is called sufficient for the parameter $\theta$ if the distribution of $X$ conditional on $T\left( X\right)$ does not depend on $\theta .$
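As a concrete illustration (a Bernoulli example of my own, not from the guide), the sketch below checks Definition 1 numerically for $T\left( X\right) =\sum X_{i}$ with i.i.d. Bernoulli observations: conditional on the sum, every arrangement of the sample is equally likely, whatever the parameter $p$.

```python
from itertools import product
from math import comb

def bernoulli_prob(x, p):
    """Probability of a binary sample x under i.i.d. Bernoulli(p)."""
    t = sum(x)
    return p**t * (1 - p)**(len(x) - t)

def conditional_given_T(x, p):
    """P(X = x | sum(X) = t): probability of x divided by the total
    probability of all samples sharing the same value of the statistic."""
    n, t = len(x), sum(x)
    total = sum(bernoulli_prob(y, p)
                for y in product([0, 1], repeat=n) if sum(y) == t)
    return bernoulli_prob(x, p) / total

# Conditional on T = 3, each of the C(4, 3) arrangements has probability
# 1 / C(4, 3) regardless of p: the conditional law is free of the parameter.
for p in (0.2, 0.5, 0.9):
    assert abs(conditional_given_T((1, 0, 1, 1), p) - 1 / comb(4, 3)) < 1e-12
```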

The main results on sufficiency and minimal sufficiency become transparent if we look at them from the point of view of Maximum Likelihood (ML) estimation.

Let $f_{X}\left( x,\theta \right)$ be the joint density of the vector $X=\left( X_{1},...,X_{n}\right)$, where $\theta$ is a parameter (possibly a vector). The ML estimator is obtained by maximizing over $\theta$ the function $f_{X}\left( x,\theta \right)$ with $x=\left(x_{1},...,x_{n}\right)$ fixed at the observed data. The estimator depends on the data and can be denoted $\hat{\theta}_{ML}\left( x\right) .$

Fisher-Neyman factorization theorem. $T\left( X\right)$ is sufficient for $\theta$ if and only if the joint density can be represented as

(1) $f_{X}\left( x,\theta \right) =g\left( T\left( x\right) ,\theta \right) k\left( x\right)$

where, as the notation suggests, $g$ depends on $x$ only through $T\left(x\right)$ and $k$ does not depend on $\theta .$

Maximizing the left side of (1) is the same thing as maximizing $g\left(T\left( x\right) ,\theta \right)$ because $k$ does not depend on $\theta .$ But this means that $\hat{\theta}_{ML}\left( x\right)$ depends on $x$ only through $T\left( x\right) .$ A sufficient statistic is all you need to find the ML estimator. This interpretation is easier to understand than the definition of sufficiency.
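To see this interpretation in action, here is a small check (a Normal example I added, not from the text): for $N\left( \mu ,\sigma ^{2}\right)$ the ML estimators are functions of $T\left( x\right) =\left( \sum x_{i},\sum x_{i}^{2}\right)$ alone, so two genuinely different datasets with the same $T$ give identical estimates.

```python
from math import isclose

def normal_mle(x):
    """ML estimators for N(mu, sigma^2): the sample mean and the (biased)
    sample variance. Both depend on x only through (sum x_i, sum x_i^2)."""
    n = len(x)
    s1, s2 = sum(x), sum(v * v for v in x)
    mu_hat = s1 / n
    sigma2_hat = s2 / n - mu_hat**2
    return mu_hat, sigma2_hat

# Two different datasets with the same T(x) = (sum, sum of squares):
x = [2.0, 0.0, 0.0]      # sum = 2, sum of squares = 4
y = [4/3, 4/3, -2/3]     # sum = 2, sum of squares = 4
assert isclose(sum(x), sum(y))
assert isclose(sum(v * v for v in x), sum(v * v for v in y))

# They produce identical ML estimates: the estimator sees only T(x).
mx, my = normal_mle(x), normal_mle(y)
assert isclose(mx[0], my[0]) and isclose(mx[1], my[1])
```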

### Minimal sufficient statistic

Definition 2. A sufficient statistic $T\left( X\right)$ is called minimal sufficient if for any other sufficient statistic $S\left( X\right)$ there exists a function $g$ such that $T\left( X\right) =g\left( S\left( X\right) \right) .$

A level set is a set of the form $\left\{ x:T\left( x\right) =c\right\} ,$ for a constant $c$ (which in general can be a constant vector). See the visualization of level sets. A level set is also called a preimage and denoted $T^{-1}\left( c\right) =\left\{ x:T\left(x\right) =c\right\} .$ When $T$ is one-to-one, the preimage contains just one point; when $T$ is not one-to-one, it contains more than one point. The wider the level set, the less information about the sample the statistic carries (because many data sets are mapped to a single point, and you cannot tell one data set from another by looking at the statistic value). In the definition of the minimal sufficient statistic we have

$\left\{x:T\left( x\right) =c\right\} =\left\{ x:g\left( S\left( x\right) \right)=c\right\} =\left\{ x:S\left( x\right) \in g^{-1}\left( c\right) \right\} .$

Since $g^{-1}\left( c\right)$ generally contains more than one point, this shows that the level sets of $T\left( X\right)$ are generally wider than those of $S\left( X\right) .$ Since this is true for any sufficient $S\left( X\right) ,$ $T\left( X\right)$ carries no more information about $X$ than any other sufficient statistic.
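A toy count (an illustration I added, assuming binary samples of size 4) makes the "width" point concrete: the statistic $T\left( x\right) =\sum x_{i}$ lumps $\binom{4}{c}$ samples into each level set, while the identity statistic keeps every sample in its own singleton preimage.

```python
from itertools import product
from collections import Counter
from math import comb

n = 4
samples = list(product([0, 1], repeat=n))

# Level sets of T(x) = sum(x): the preimage of c collects every binary
# sample whose entries sum to c, so information about the order is lost.
level_sizes = Counter(sum(x) for x in samples)
for c in range(n + 1):
    assert level_sizes[c] == comb(n, c)

# The identity statistic S(x) = x is one-to-one: every preimage is a single
# point, so S loses nothing, at the cost of not compressing the data at all.
assert all(count == 1 for count in Counter(samples).values())
```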

Definition 2 is an existence statement and is difficult to verify directly because of the quantifiers "for any" and "there exists". Again, it is better to relate it to ML estimation.

Suppose for two sets of data $x,y$ there is a positive number $k\left(x,y\right)$ such that

(2) $f_{X}\left( x,\theta \right) =k\left( x,y\right) f_{X}\left( y,\theta\right) .$

Maximizing the left side we get the estimator $\hat{\theta}_{ML}\left(x\right) .$ Maximizing $f_{X}\left( y,\theta \right)$ we get $\hat{\theta}_{ML}\left( y\right) .$ Since $k\left( x,y\right)$ does not depend on $\theta ,$ (2) tells us that

$\hat{\theta}_{ML}\left( x\right) =\hat{\theta}_{ML}\left( y\right) .$

Thus, if two sets of data $x,y$ satisfy (2), the ML method cannot distinguish between $x$ and $y$ and supplies the same estimator. Let us call $x,y$ indistinguishable if there is a positive number $k\left( x,y\right)$ such that (2) is true.
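A quick numeric check of this (a Bernoulli illustration of my own): two samples with the same number of successes satisfy (2) with $k\left( x,y\right) =1,$ and a grid search confirms that their likelihoods peak at the same $\theta .$

```python
def bern_lik(x, p):
    """Bernoulli likelihood of a binary sample x at parameter p."""
    t = sum(x)
    return p**t * (1 - p)**(len(x) - t)

x = (1, 1, 0, 0, 1)   # three successes
y = (0, 1, 1, 1, 0)   # three successes in a different order

grid = [i / 1000 for i in range(1, 1000)]

# The likelihood ratio is the constant k(x, y) = 1 at every parameter value...
ratios = {round(bern_lik(x, p) / bern_lik(y, p), 12) for p in grid}
assert ratios == {1.0}

# ...so both likelihoods are maximized at the same point (here 3/5):
# the ML method cannot tell x from y.
argmax_x = max(grid, key=lambda p: bern_lik(x, p))
argmax_y = max(grid, key=lambda p: bern_lik(y, p))
assert argmax_x == argmax_y == 0.6
```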

An equation $T\left( x\right) =T\left( y\right)$ means that $x,y$ belong to the same level set.

Characterization of minimal sufficiency. A statistic $T\left( X\right)$ is minimal sufficient if and only if its level sets coincide with sets of indistinguishable $x,y.$

The advantage of this formulation is that it links the geometric notion of level sets to properties of the ML estimator. The formulation in the guide by J. Abdey is:

A statistic $T\left( X\right)$ is minimal sufficient if and only if the equality $T\left( x\right) =T\left( y\right)$ is equivalent to (2).

Rewriting (2) as

(3) $f_{X}\left( x,\theta \right) /f_{X}\left( y,\theta \right) =k\left(x,y\right)$

we get a practical way of finding a minimal sufficient statistic: form the ratio on the left of (3) and find the sets along which the ratio does not depend on $\theta .$ Those sets will be the level sets of $T\left( X\right) .$
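For instance (a sketch under the assumption $X_{i}\sim N\left( \theta ,1\right) ,$ my example rather than the guide's), the ratio (3) equals $\exp \left\{ \theta \sum \left( x_{i}-y_{i}\right) -\frac{1}{2}\left( \sum x_{i}^{2}-\sum y_{i}^{2}\right) \right\} ,$ which is free of $\theta$ exactly when $\sum x_{i}=\sum y_{i},$ so that $T\left( X\right) =\sum X_{i}$ is minimal sufficient:

```python
from math import exp

def normal_lik(x, theta):
    """Likelihood of a sample x under N(theta, 1), up to a constant factor."""
    return exp(-sum((v - theta)**2 for v in x) / 2)

grid = [-2.0, -0.5, 0.0, 1.0, 2.5]

# Equal sums (both 3.0) but different data: the ratio (3) is constant in theta.
x, y = [1.0, 2.0, 0.0], [0.5, 1.5, 1.0]
ratios = [normal_lik(x, t) / normal_lik(y, t) for t in grid]
assert max(ratios) - min(ratios) < 1e-9

# Different sums (3.0 vs 0.0): the ratio moves with theta, so these samples
# lie in different level sets of the minimal sufficient statistic sum(x).
x2, y2 = [1.0, 2.0, 0.0], [0.0, 0.0, 0.0]
ratios2 = [normal_lik(x2, t) / normal_lik(y2, t) for t in grid]
assert max(ratios2) - min(ratios2) > 1.0
```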