Full solution to Example 2.15 from the guide ST2134
Students tend to miss the ideas needed for this example for two reasons: in the guide there is a reference to ST104b Statistics 2, which nobody consults, and the short notation of the statistic conceals the essence.
Recommendation: for the statistic always use $T(X_1,\dots,X_n)$ rather than $T$. Similarly, if $(x_1,\dots,x_n)$ is the observed sample, use $T(x_1,\dots,x_n)$ rather than $T$.
Example 2.15. Let $X_1,\dots,X_n$ be a random sample (meaning i.i.d. variables) from a Poisson($\lambda$) distribution, and let $T = \sum_{i=1}^n X_i$. Show that $T$ is a sufficient statistic.
Solution. There are two ways to solve this problem. One is to use the definition of a sufficient statistic, and the other is to apply the sufficiency principle. It is a good idea to announce at the very beginning which way you will go. We apply the definition, and we have to show that the distribution of $(X_1,\dots,X_n)$ conditional on $T$ does not depend on $\lambda$.
Step 1. The density of $X_i$ is $p(x_i;\lambda) = e^{-\lambda}\frac{\lambda^{x_i}}{x_i!}$, and by independence the joint density is
$p(x_1,\dots,x_n;\lambda) = \prod_{i=1}^n e^{-\lambda}\frac{\lambda^{x_i}}{x_i!} = e^{-n\lambda}\frac{\lambda^{\sum_i x_i}}{\prod_i x_i!}.$
Step 2. We need to characterize the distribution of $T = \sum_{i=1}^n X_i$, and this is accomplished with the MGF. For one Poisson variable we have
$M_{X_i}(t) = Ee^{tX_i} = \sum_{x=0}^\infty e^{tx}e^{-\lambda}\frac{\lambda^x}{x!} = e^{-\lambda}\sum_{x=0}^\infty \frac{(\lambda e^t)^x}{x!} = e^{\lambda(e^t-1)}.$
Here we used the completeness axiom (probabilities sum to one) with $\lambda$ replaced by $\lambda e^t$.
Step 3. This result implies for the sum
$M_T(t) = Ee^{t\sum_i X_i} = \prod_{i=1}^n Ee^{tX_i}$ (by independence) $= \left[M_{X_1}(t)\right]^n$ (since $X_1,\dots,X_n$ have identical distributions) $= e^{n\lambda(e^t-1)}$ (by Step 2).
By Step 2 we know that $Y \sim \text{Poisson}(n\lambda)$ implies $M_Y(t) = e^{n\lambda(e^t-1)}$. As we just showed, the MGF of $T$ is the same. The uniqueness theorem says:
if random variables $X$ and $Y$ have the same MGFs, then their distributions are the same.
It follows that $T$ is Poisson with parameter $n\lambda$, so that $P(T=t) = e^{-n\lambda}\frac{(n\lambda)^t}{t!}$ (the guide writes this in a short notation which, to me, is not transparent).
Step 4. To check that the conditional density does not depend on the parameter, we recall that along the level set $\{T(x_1,\dots,x_n) = t\}$ the conditional density simplifies to (see Guide, p.30)
$P(X_1=x_1,\dots,X_n=x_n \mid T=t) = \frac{P(X_1=x_1,\dots,X_n=x_n)}{P(T=t)}$
(no joint density of the sample and $T$ in the numerator, because on the level set the event $\{T=t\}$ is automatically satisfied). In our situation the full expression for the ratio on the right is
$\frac{e^{-n\lambda}\lambda^{\sum_i x_i}/\prod_i x_i!}{e^{-n\lambda}(n\lambda)^t/t!} = \frac{t!}{n^t\prod_{i=1}^n x_i!}.$
As there is no $\lambda$ in the result, $T$ is sufficient for $\lambda$.
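This is not part of the guide's solution, but a quick Monte Carlo sanity check (NumPy assumed; the sample size $n$, the conditioning value $t$ and the two values of $\lambda$ are arbitrary illustrative choices): the empirical distribution of $X_1$ given $T = t$ should not change when $\lambda$ changes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, t = 3, 6   # sample size and the conditioning value T = t

def conditional_dist_of_x1(lam, n_draws=300_000):
    """Simulate Poisson(lam) samples of size n, keep those with sum equal to t,
    and return the empirical distribution of X_1 given T = t."""
    X = rng.poisson(lam, size=(n_draws, n))
    kept = X[X.sum(axis=1) == t]
    return np.bincount(kept[:, 0], minlength=t + 1) / len(kept)

# The conditional distribution is free of lambda (it is Binomial(t, 1/n)),
# so the two printed rows should agree up to simulation noise.
print(conditional_dist_of_x1(1.0))
print(conditional_dist_of_x1(3.0))
```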
Exercise. Do Example 2.16 from the Guide following this format.
Unlike most UoL exams, here I tried to relate the theory to practical issues.
KBTU International School of Economics
Compiled by Kairat Mynbaev
The total for this exam is 41 points. You have two hours.
Everywhere provide detailed explanations. When answering please clearly indicate question numbers. You don’t need a calculator. As long as the formula you provide is correct, the numerical value does not matter.
Question 1. (12 points)
a) (2 points) At a casino, two players are playing on slot machines. Their payoffs $X$ and $Y$ are standard normal and independent. Find the joint density of the payoffs.
b) (4 points) Two other players watch the first two players and start to argue about which will be larger: the sum $X+Y$ or the difference $X-Y$. Find the joint density of $(X+Y, X-Y)$. Are these variables independent? Find their marginal densities.
c) (2 points) Are $X+Y$ and $X-Y$ normal? Why? What are their means and variances?
d) (2 points) Which probability is larger: or ?
e) (2 points) In this context interpret the conditional expectation . How much is it?
Reminder. The density of a normal $N(\mu,\sigma^2)$ variable is $f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)$.
Question 2. (9 points) The duration $X_i$ of a call by one Kcell [the largest mobile operator in KZ] customer is exponential: $f_X(x) = \mu e^{-\mu x}$, $x > 0$. The number $N$ of customers making calls simultaneously is distributed as Poisson: $P(N = n) = e^{-\lambda}\frac{\lambda^n}{n!}$, $n = 0, 1, 2, \dots$ Thus the total call duration for all customers is $S_N = X_1 + \dots + X_N$ for $N \ge 1$. We put $S_0 = 0$. Assume that customers make their decisions about calling independently.
a) (3 points) Find the general formula (when $X_1, X_2, \dots$ are identically distributed and $X_1, X_2, \dots, N$ are independent, but not necessarily exponential and Poisson as above) for the moment generating function of $S_N$, explaining all steps.
b) (3 points) Find the moment generating functions of $X_1$, $N$ and $S_N$ for your particular distributions.
c) (3 points) Find the mean and variance of $S_N$. Based on the equations you obtained, can you suggest estimators of the parameters $\mu$ and $\lambda$?
Remark. Direct observations on the exponential and Poisson distributions are not available. We have to infer their parameters by observing $S_N$. This explains the importance of the technique used in Question 2.
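The exam expects an analytical answer; the following sketch (NumPy assumed, with arbitrary values of the Poisson mean and the exponential rate) only checks numerically the compound-sum moments that part c) leads to in the Poisson case: $ES_N = \lambda EX_1$ and $Var(S_N) = \lambda EX_1^2$.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, mu = 4.0, 0.5        # Poisson mean and exponential rate (illustrative values)

def total_duration():
    """One draw of S_N = X_1 + ... + X_N, N ~ Poisson(lam), X_i ~ Exponential(mu)."""
    n = rng.poisson(lam)
    return rng.exponential(1 / mu, size=n).sum() if n > 0 else 0.0

draws = np.array([total_duration() for _ in range(100_000)])
print(draws.mean(), lam / mu)            # E S_N = lam * E X_1 = lam / mu
print(draws.var(), 2 * lam / mu**2)      # Var S_N = lam * E X_1^2 = 2 lam / mu^2
```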
Question 3. (8 points)
a) (2 points) For a non-negative random variable $X$ prove the Markov inequality $P(X \ge a) \le \frac{EX}{a}$, $a > 0$.
b) (2 points) Prove the Chebyshev inequality $P(|X - EX| \ge a) \le \frac{Var(X)}{a^2}$ for an arbitrary random variable $X$ with finite variance.
c) (4 points) We say that a sequence of random variables $\{T_n\}$ converges in probability to a random variable $T$ if $P(|T_n - T| > \varepsilon) \to 0$ as $n \to \infty$ for any $\varepsilon > 0$. Suppose that $ET_n = \mu$ for all $n$ and that $Var(T_n) \to 0$ as $n \to \infty$. Prove that then $T_n$ converges in probability to $\mu$.
Remark. Question 3 leads to the simplest example of a law of large numbers: if $X_1,\dots,X_n$ are i.i.d. with finite variance, then their sample mean converges to their population mean in probability.
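A small simulation illustrating part c) and the Remark (NumPy assumed; normal data, the population parameters and the tolerance $\varepsilon$ are arbitrary): the frequency of $|\bar{X}_n - \mu| > \varepsilon$ goes to zero and stays below the Chebyshev bound $\sigma^2/(n\varepsilon^2)$.

```python
import numpy as np

rng = np.random.default_rng(2)
mu, sigma, eps, reps = 1.5, 2.0, 0.5, 10_000

for n in [10, 100, 1000]:
    means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    freq = np.mean(np.abs(means - mu) > eps)     # empirical P(|sample mean - mu| > eps)
    bound = sigma**2 / (n * eps**2)              # Chebyshev bound Var(mean)/eps^2
    print(n, freq, min(bound, 1.0))
```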
Question 4. (8 points)
a) (4 points) Define a distribution function. Give its properties, with intuitive explanations.
b) (4 points) Is a sum of two distribution functions a distribution function? Is a product of two distribution functions a distribution function?
Remark. The answer for part a) is here and the one for part b) is based on it.
Question 5. (4 points) The Rakhat factory prepares prizes for kids for the upcoming New Year event. Each prize contains one type of chocolates and one type of candies. The chocolates and candies are chosen randomly from two production lines; the total number of items is always 10, and all selections are equally likely.
a) (2 points) What proportion of prepared prizes contains three or more chocolates?
b) (2 points) 100 prizes have been sent to an orphanage. What is the probability that 50 of those prizes contain no more than two chocolates?
Suppose we are observing two stocks and their respective returns are $x_t$ and $y_t$. A vector autoregression for the pair $(x_t, y_t)$ is one way to take into account their interdependence. This theory is undeservedly omitted from the Guide by A. Patton.
Matrix multiplication is a little more complex. Make sure to read Global idea 2 and the compatibility rule.
The general approach to studying matrices is to compare them to numbers. Here you see the first big No: matrices do not commute, that is, in general $AB \neq BA$.
The idea behind matrix inversion is pretty simple: we want an analog $AA^{-1} = A^{-1}A = I$ of the property $aa^{-1} = 1$ that holds for nonzero numbers.
Some facts about determinants have very complicated proofs and it is best to stay away from them. But a couple of ideas should be clear from the very beginning. Determinants are defined only for square matrices. The relationship of determinants to matrix invertibility explains the role of determinants. If $A$ is square, it is invertible if and only if $\det A \neq 0$ (this is an analog of the condition $a \neq 0$ for numbers).
Here is an illustration of how determinants are used. Suppose we need to solve the equation $AX = B$ for $X$, where $A$ and $B$ are known. Assuming that $\det A \neq 0$, we can premultiply the equation by $A^{-1}$ to obtain $A^{-1}AX = A^{-1}B$ (because of the lack of commutativity, we need to keep the order of the factors). Using the intuitive properties $A^{-1}A = I$ and $IX = X$, we obtain the solution $X = A^{-1}B$. In particular, we see that if $\det A \neq 0$, then the equation $AX = B$ has a unique solution.
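As a quick NumPy illustration of this paragraph (the matrices below are arbitrary): check $\det A \neq 0$, compute $X = A^{-1}B$ and verify $AX = B$.

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[5.0],
              [10.0]])

print(np.linalg.det(A))       # nonzero, so A is invertible
X = np.linalg.inv(A) @ B      # X = A^{-1} B; the order of the factors matters
print(X)
print(np.allclose(A @ X, B))  # check that the solution satisfies AX = B
```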
Let be a square matrix and let be two vectors. are assumed to be known and is unknown. We want to check that solves the equation (Note that for this equation the trick used to solve does not work.) Just plug
(write out a couple of first terms in the sums if summation signs frighten you).
Transposition is a geometrically simple operation. We need only the property $(AB)^T = B^TA^T$.
Variance and covariance
Property 1. The variance of a random vector $X$ and the covariance of two random vectors $X$ and $Y$ are defined by
$Var(X) = E(X - EX)(X - EX)^T, \qquad Cov(X, Y) = E(X - EX)(Y - EY)^T,$
respectively.
Note that when $X$ is scalar, the variance becomes the familiar $Var(X) = E(X - EX)^2$.
Property 2. Let $X$, $Y$ be random vectors and suppose $A$, $B$ are constant matrices. We want an analog of $Cov(aX, bY) = ab\,Cov(X, Y)$, which is true for numbers $a$, $b$. In the next calculation we have to remember that the multiplication order cannot be changed:
$Cov(AX, BY) = E(AX - E(AX))(BY - E(BY))^T = A\,E(X - EX)(Y - EY)^T B^T = A\,Cov(X, Y)\,B^T.$
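A numerical check of Property 2 in the special case $Y = X$, $B = A$ (so the claim becomes $Var(AX) = A\,Var(X)A^T$); the covariance matrix and the matrix $A$ below are arbitrary illustrative choices, NumPy assumed.

```python
import numpy as np

rng = np.random.default_rng(3)
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])          # Var(X)
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

X = rng.multivariate_normal(mean=[0.0, 0.0], cov=Sigma, size=200_000)
AX = X @ A.T                             # each row is A x

print(np.cov(AX, rowvar=False))          # sample Var(AX)
print(A @ Sigma @ A.T)                   # theoretical A Var(X) A^T
```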
This is the exam I administered in my class in Spring 2022. By replacing the Poisson distribution with other random variables the UoL examiners can obtain a large variety of versions with which to torture Advanced Statistics students. On the other hand, for the students the answers below can be a blueprint to fend off any assaults.
During the semester my students were encouraged to analyze and collect information in documents typed in Scientific Word or LyX. The exam was an open-book online assessment. Papers typed in Scientific Word or LyX were preferred and copying from previous analysis was welcomed. This policy would be my preference if I were to study a subject as complex as Advanced Statistics. The students were given just two hours on the assumption that they had done the preparations diligently. Below I give the model answers right after the questions.
Midterm Spring 2022
You have to clearly state all required theoretical facts. Number all equations that you need to use in later calculations and reference them as necessary. Answer the questions in the order they are asked. When you don't know the answer, leave some space. For each unexplained fact I subtract one point. Put your name in the file name.
In questions 1-9, $X$ is the Poisson($\lambda$) variable.
Question 1
Define and derive the population mean and population variance of the sum $T = \sum_{i=1}^n X_i$, where $X_1,\dots,X_n$ is an i.i.d. sample from $X$.
Answer. $T$ is defined by $T = \sum_{i=1}^n X_i$. Using $EX_i = \lambda$ and $Var(X_i) = \lambda$ (ST2133 p.80) we have
$ET = \sum_{i=1}^n EX_i = n\lambda, \qquad Var(T) = \sum_{i=1}^n Var(X_i) = n\lambda$
(by independence and identical distribution). [Some students derived the respective equations for the sample mean instead of those for the sum $T$.]
Question 2
Derive the MGF of the standardized sample mean.
Answer. Knowing this derivation is a must because it is a combination of three important facts.
a) Let $z = \frac{\bar{X} - E\bar{X}}{\sqrt{Var(\bar{X})}}$ be the standardized sample mean. Then, because $\bar{X} = T/n$, also $z = \frac{T - ET}{\sqrt{Var(T)}}$, so standardizing $\bar{X}$ and $T$ gives the same result.
b) The MGF of $T$ is expressed through the MGF of $X_1$:
$M_T(t) = Ee^{t\sum_i X_i} = \prod_{i=1}^n Ee^{tX_i}$ (independence) $= \left[M_{X_1}(t)\right]^n$ (identical distribution).
c) If $Y = a + bX$ is a linear transformation of $X$, then $M_Y(t) = Ee^{t(a+bX)} = e^{at}M_X(bt)$.
When answering the question we assume an arbitrary i.i.d. sample from a population with mean $\mu$ and population variance $\sigma^2$: $EX_i = \mu$, $Var(X_i) = \sigma^2$.
Putting $a = -\frac{\sqrt{n}\mu}{\sigma}$, $b = \frac{1}{\sigma\sqrt{n}}$ in c) and using a) we get
$M_z(t) = e^{at}M_T(bt) = \exp\left(-\frac{\sqrt{n}\mu}{\sigma}t\right)\left[M_{X_1}\!\left(\frac{t}{\sigma\sqrt{n}}\right)\right]^n$
(using b) and $ET = n\mu$, $Var(T) = n\sigma^2$).
This is a general result, which for the Poisson distribution can be specified as follows. From ST2133, Example 3.38 we know that $M_{X_1}(t) = e^{\lambda(e^t-1)}$. Therefore, substituting $\mu = \lambda$, $\sigma^2 = \lambda$, we obtain
$M_z(t) = \exp\left(-t\sqrt{n\lambda}\right)\exp\left(n\lambda\left(e^{t/\sqrt{n\lambda}}-1\right)\right).$
[Instead of some students gave .]
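Not required in the exam, but the result can be checked numerically (NumPy assumed; $\lambda$, $n$ and the MGF argument $t$ are arbitrary): the Monte Carlo estimate of $Ee^{tz}$ should agree with the closed form derived above.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, n, t = 2.0, 50, 0.7

T = rng.poisson(lam, size=(200_000, n)).sum(axis=1)
z = (T - n * lam) / np.sqrt(n * lam)          # standardized sum = standardized mean
mc = np.exp(t * z).mean()                     # Monte Carlo estimate of E exp(t z)

closed = np.exp(-t * np.sqrt(n * lam) + n * lam * (np.exp(t / np.sqrt(n * lam)) - 1))
print(mc, closed)
```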
Question 3
Derive the cumulant generating function of the standardized sample mean.
Answer. Again, there are a couple of useful general facts.
I) Decomposition of the MGF around zero. The series $e^{tX} = \sum_{k=0}^\infty \frac{(tX)^k}{k!}$ leads to
$M_X(t) = Ee^{tX} = \sum_{k=0}^\infty \frac{\mu_k}{k!}t^k,$
where $\mu_k = EX^k$ are the moments of $X$ and $\mu_0 = 1$. Differentiating this equation yields
$M_X^{(k)}(t) = \mu_k + \mu_{k+1}t + \frac{\mu_{k+2}}{2}t^2 + \dots,$
and setting $t = 0$ gives the rule for finding moments from the MGF: $\mu_k = M_X^{(k)}(0)$.
II) Decomposition of the cumulant generating function around zero. The cumulant generating function $K_X(t) = \log M_X(t)$ can also be decomposed into its Taylor series:
$K_X(t) = \sum_{k=1}^\infty \frac{\kappa_k}{k!}t^k,$
where the coefficients $\kappa_k$ are called cumulants and can be found using $\kappa_k = K_X^{(k)}(0)$. Since
$K_X'(t) = \frac{M_X'(t)}{M_X(t)}$
and
$K_X''(t) = \frac{M_X''(t)M_X(t) - [M_X'(t)]^2}{[M_X(t)]^2},$
we have
$\kappa_1 = K_X'(0) = \mu_1 = EX, \qquad \kappa_2 = K_X''(0) = \mu_2 - \mu_1^2 = Var(X).$
Thus, for any random variable $X$ with mean $\mu$ and variance $\sigma^2$ we have
$K_X(t) = \mu t + \frac{\sigma^2}{2}t^2 + $ terms of higher order, for $t$ small.
III) If $Y = a + bX$, then by c) $K_Y(t) = \log M_Y(t) = \log\left(e^{at}M_X(bt)\right) = at + K_X(bt)$.
IV) By b) $K_T(t) = \log M_T(t) = \log\left[M_{X_1}(t)\right]^n = nK_{X_1}(t)$.
Using III) with $a = -\frac{\sqrt{n}\mu}{\sigma}$, $b = \frac{1}{\sigma\sqrt{n}}$, and then IV), we have
$K_z(t) = -\frac{\sqrt{n}\mu}{\sigma}t + K_T\!\left(\frac{t}{\sigma\sqrt{n}}\right) = -\frac{\sqrt{n}\mu}{\sigma}t + nK_{X_1}\!\left(\frac{t}{\sigma\sqrt{n}}\right).$
For the last term on the right we use the approximation around zero from II):
$K_z(t) = -\frac{\sqrt{n}\mu}{\sigma}t + n\left(\mu\frac{t}{\sigma\sqrt{n}} + \frac{\sigma^2}{2}\frac{t^2}{\sigma^2 n} + \dots\right) = \frac{t^2}{2} + \text{terms that tend to zero as } n \to \infty.$
[Important. Why are the above steps necessary? Passing from the series for $M_X(t)$ to the series for $K_X(t) = \log M_X(t)$ is not straightforward and can easily lead to errors. In the case of the Poisson distribution it is not advisable to derive $K_z(t)$ directly from $M_z(t)$.]
Question 4
Prove the central limit theorem using the cumulant generating function you obtained.
Answer. In the previous question we proved that around zero
$K_z(t) \to \frac{t^2}{2} \text{ as } n \to \infty.$
This implies that
(1) $M_z(t) = e^{K_z(t)} \to e^{t^2/2}$ for each $t$ around zero.
But we know (ST2133, Example 3.42) that for a standard normal variable $u$ the MGF is
(2) $M_u(t) = e^{t^2/2}$ for all $t$.
Theorem (link between pointwise convergence of MGFs and convergence in distribution). Let $\{z_n\}$ be a sequence of random variables and let $z$ be some random variable. If $M_{z_n}(t)$ converges for each $t$ from a neighborhood of zero to $M_z(t)$, then $z_n$ converges in distribution to $z$.
Using (1), (2) and this theorem we finish the proof that the standardized sample mean $z$ converges in distribution to the standard normal, which is the central limit theorem.
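To see the theorem "in action", here is a small illustration (NumPy and SciPy assumed; $\lambda$ and the sample sizes are arbitrary): as $n$ grows, the simulated $P(z \le 1)$ approaches the standard normal value $\Phi(1) \approx 0.8413$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
lam = 2.0

for n in [5, 50, 500]:
    T = rng.poisson(lam, size=(20_000, n)).sum(axis=1)
    z = (T - n * lam) / np.sqrt(n * lam)        # standardized sample mean
    print(n, np.mean(z <= 1.0), norm.cdf(1.0))  # empirical P(z <= 1) vs Phi(1)
```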
Question 5
State the factorization theorem and apply it to show that $T = \sum_{i=1}^n X_i$ is a sufficient statistic.
Answer. The solution is given on p.180 of ST2134. For the Poisson distribution the joint density is
(3) $p(x_1,\dots,x_n;\lambda) = \prod_{i=1}^n e^{-\lambda}\frac{\lambda^{x_i}}{x_i!} = e^{-n\lambda}\lambda^{\sum_i x_i}\cdot\frac{1}{\prod_i x_i!} = g\!\left(\sum_i x_i, \lambda\right)h(x_1,\dots,x_n),$
and then we see that $T = \sum_{i=1}^n X_i$ is a sufficient statistic for $\lambda$.
Question 6
Find a minimal sufficient statistic for $\lambda$, stating all necessary theoretical facts.
Answer. Characterization of minimal sufficiency: a statistic $T$ is minimal sufficient if and only if the level sets of $T$ coincide with the sets on which the ratio $\frac{p(x_1,\dots,x_n;\lambda)}{p(y_1,\dots,y_n;\lambda)}$ does not depend on $\lambda$.
From (3)
$\frac{p(x_1,\dots,x_n;\lambda)}{p(y_1,\dots,y_n;\lambda)} = \lambda^{\sum_i x_i - \sum_i y_i}\frac{\prod_i y_i!}{\prod_i x_i!}.$
The expression on the right does not depend on $\lambda$ if and only if $\sum_{i=1}^n x_i = \sum_{i=1}^n y_i$. The last condition describes the level sets of $T = \sum_{i=1}^n X_i$. Thus $T$ is minimal sufficient.
Question 7
Find the Method of Moments estimator of the population mean.
Answer. The idea of the method is to take some population property (for example, $EX = \lambda$) and replace the population characteristic (in this case $EX$) by its sample analog (the sample mean $\bar{X}$) to obtain an MM estimator. In our case $\hat{\lambda}_{MM} = \bar{X} = \frac{1}{n}\sum_{i=1}^n X_i$. [Try to do this for the Gamma distribution.]
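A two-line numerical illustration (NumPy assumed; the true $\lambda$ is arbitrary): the MM estimate is just the sample mean.

```python
import numpy as np

rng = np.random.default_rng(6)
sample = rng.poisson(3.0, size=1_000)   # data generated with true lambda = 3
lam_mm = sample.mean()                   # MM estimator: replace EX = lambda by the sample mean
print(lam_mm)
```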
Question 8
Find the Fisher information.
Answer. From Problem 5 the log-likelihood is
$\ell(\lambda; x_1,\dots,x_n) = -n\lambda + \log\lambda\sum_{i=1}^n x_i - \sum_{i=1}^n \log(x_i!).$
Hence the score function is (see Example 2.30 in ST2134)
$s(\lambda; x_1,\dots,x_n) = \frac{\partial\ell}{\partial\lambda} = -n + \frac{1}{\lambda}\sum_{i=1}^n x_i.$
Then
$\frac{\partial s}{\partial\lambda} = -\frac{1}{\lambda^2}\sum_{i=1}^n x_i,$
and the Fisher information is
$I_n(\lambda) = -E\left[\frac{\partial s}{\partial\lambda}\right] = \frac{1}{\lambda^2}\sum_{i=1}^n EX_i = \frac{n}{\lambda}.$
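The answer can also be checked through the information equality $I_n(\lambda) = Var(s(\lambda; X_1,\dots,X_n))$; the simulation below (NumPy assumed, arbitrary $\lambda$ and $n$) compares the Monte Carlo variance of the score with $n/\lambda$.

```python
import numpy as np

rng = np.random.default_rng(7)
lam, n = 3.0, 20

X = rng.poisson(lam, size=(100_000, n))
score = -n + X.sum(axis=1) / lam         # score evaluated at the true lambda
print(score.var())                        # should be close to the Fisher information
print(n / lam)                            # I_n(lambda) = n / lambda
```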
Question 9
Derive the Cramer-Rao lower bound for $Var(\bar{X})$ for a random sample.
Answer. (See Example 3.17 in ST2134.) Since $\bar{X}$ is an unbiased estimator of $\lambda$ by Problem 1, from the Cramer-Rao theorem we know that
$Var(\bar{X}) \ge \frac{1}{I_n(\lambda)} = \frac{\lambda}{n},$
and in fact by Problem 1 this lower bound is attained: $Var(\bar{X}) = Var(T)/n^2 = \lambda/n$.
Here we show that knowledge of the distribution of $s^2$ for linear regression allows one to do without the long calculations contained in the guide ST2134 by J. Abdey.
Theorem. Let $y_1,\dots,y_n$ be independent observations from $N(\mu,\sigma^2)$. 1) $s^2(n-1)/\sigma^2$ is distributed as $\chi^2_{n-1}$. 2) The estimators $\bar{y}$ and $s^2$ are independent. 3) $Es^2 = \sigma^2$. 4) $Var(s^2) = \frac{2\sigma^4}{n-1}$. 5) $\frac{\sqrt{n-1}(s^2 - \sigma^2)}{\sqrt{2}\sigma^2}$ converges in distribution to $N(0,1)$.
Proof. We can write $y_i = \mu + e_i$, where $e_i$ is distributed as $N(0,\sigma^2)$. Putting $\beta = \mu$, $y = (y_1,\dots,y_n)^T$, $e = (e_1,\dots,e_n)^T$ and $X = (1,\dots,1)^T$ (a vector of ones), we satisfy (1) and (2). Since $X^TX = n$, we have $\hat\beta = (X^TX)^{-1}X^Ty = \bar{y}$. Further, parts 1) and 2) follow from the theorem below on the error variance estimator (applied with $k = 1$), and parts 3)-5) are consequences of the $\chi^2_{n-1}$ distribution in part 1).
Distribution of the estimator of the error variance
If you are reading the book by Dougherty: this post is about the distribution of the estimator $s^2$ of the error variance defined in Chapter 3.
Consider the regression
(1) $y = X\beta + e,$
where the deterministic matrix $X$ is of size $n \times k$, satisfies $\text{rank}(X) = k$ (regressors are not collinear) and the error $e$ satisfies
(2) $Ee = 0, \qquad Var(e) = Eee^T = \sigma^2 I.$
$\beta$ is estimated by $\hat\beta = (X^TX)^{-1}X^Ty$. Denote $P = X(X^TX)^{-1}X^T$, $Q = I - P$. Using (1) we see that $\hat\beta = \beta + (X^TX)^{-1}X^Te$ and that the residual vector is $\hat{e} = y - X\hat\beta = Qe$; $\sigma^2$ is estimated by
(3) $s^2 = \frac{\hat{e}^T\hat{e}}{n-k} = \frac{e^TQe}{n-k}.$
$Q$ is a projector and has properties which are derived from those of $P$:
(4) $Q^T = Q, \qquad Q^2 = Q.$
If $\lambda$ is an eigenvalue of $Q$, that is $Qx = \lambda x$ for some $x \neq 0$, then multiplying by $Q$ and using the fact that $Q^2 = Q$ we get $\lambda x = Q^2x = \lambda^2 x$. Hence the eigenvalues of $Q$ can be only $0$ or $1$. The equation
$\text{tr}(Q) = \text{tr}(I_n) - \text{tr}\!\left(X(X^TX)^{-1}X^T\right) = n - k$
tells us that the number of eigenvalues equal to 1 is $n-k$ and the remaining $k$ are zeros. Let $Q = U\Lambda U^T$ be the diagonal representation of $Q$. Here $U$ is an orthogonal matrix,
(5) $U^TU = UU^T = I,$
and $\Lambda$ is a diagonal matrix with the eigenvalues of $Q$ on the main diagonal. We can assume that the first $n-k$ numbers on the diagonal of $\Lambda$ are ones and the others are zeros.
Theorem. Let the error $e$ be normal. 1) $s^2(n-k)/\sigma^2$ is distributed as $\chi^2_{n-k}$. 2) The estimators $\hat\beta$ and $s^2$ are independent.
Proof. 1) By (4) $Q$ is symmetric, so the diagonal representation $Q = U\Lambda U^T$ exists, and we have
(6) $e^TQe = e^TU\Lambda U^Te = (U^Te)^T\Lambda(U^Te).$
Denote $w = U^Te$. From (2) and (5)
$Ew = U^TEe = 0, \qquad Var(w) = Eww^T = U^TE(ee^T)U = \sigma^2U^TU = \sigma^2I,$
and $w$ is normal as a linear transformation of a normal vector. It follows that $w = \sigma u$, where $u$ is a standard normal vector with independent standard normal coordinates $u_1,\dots,u_n$. Hence, (6) implies
(7) $e^TQe = w^T\Lambda w = \sigma^2u^T\Lambda u = \sigma^2\sum_{i=1}^{n-k}u_i^2.$
(3) and (7) prove the first statement.
2) First we note that the vectors $X^Te$ and $Qe$ are independent. Since they are normal, their independence follows from
$Cov(X^Te, Qe) = E\left[X^Tee^TQ^T\right] = \sigma^2X^TQ = \sigma^2\left(X^T - X^TX(X^TX)^{-1}X^T\right) = 0.$
It is easy to see that $Q^TQ = Q$. This allows us to show that $s^2$ is a function of $Qe$:
$s^2 = \frac{e^TQe}{n-k} = \frac{(Qe)^T(Qe)}{n-k}.$
Independence of $X^Te$ and $Qe$ leads to independence of their functions $\hat\beta = \beta + (X^TX)^{-1}X^Te$ and $s^2$.
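A simulation sketch of the theorem (NumPy assumed; the design matrix, $\beta$ and $\sigma$ are arbitrary): the statistic $(n-k)s^2/\sigma^2$ should have mean $n-k$ and variance $2(n-k)$, as a $\chi^2_{n-k}$ variable does, and should be uncorrelated with $\hat\beta$.

```python
import numpy as np

rng = np.random.default_rng(8)
n, k, sigma = 50, 3, 2.0
X = rng.normal(size=(n, k))                  # a fixed design matrix
beta = np.array([1.0, -2.0, 0.5])

reps = 20_000
chi_stat = np.empty(reps)
b_first = np.empty(reps)
for r in range(reps):
    e = rng.normal(0.0, sigma, size=n)
    y = X @ beta + e
    bhat = np.linalg.solve(X.T @ X, X.T @ y)  # OLS estimator
    ehat = y - X @ bhat                       # residual vector Qe
    s2 = ehat @ ehat / (n - k)
    chi_stat[r] = (n - k) * s2 / sigma**2
    b_first[r] = bhat[0]

print(chi_stat.mean(), n - k)                 # chi^2_{n-k} mean
print(chi_stat.var(), 2 * (n - k))            # chi^2_{n-k} variance
print(np.corrcoef(chi_stat, b_first)[0, 1])   # ~0: beta-hat and s^2 are independent
```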
I find that the notation of a statistic should reflect its dependence on the argument. So I write $T(X_1,\dots,X_n)$ for a statistic, where $(X_1,\dots,X_n)$ is a sample, instead of a faceless $T$ or $T(X)$.
Definition 1. The statistic $T(X_1,\dots,X_n)$ is called sufficient for the parameter $\theta$ if the distribution of $(X_1,\dots,X_n)$ conditional on $T$ does not depend on $\theta$.
The main results on sufficiency and minimal sufficiency become transparent if we look at them from the point of view of Maximum Likelihood (ML) estimation.
Let $f(x_1,\dots,x_n;\theta)$ be the joint density of the vector $(X_1,\dots,X_n)$, where $\theta$ is a parameter (possibly a vector). The ML estimator is obtained by maximizing over $\theta$ the function $f(x_1,\dots,x_n;\theta)$ with $(x_1,\dots,x_n)$ fixed at the observed data. The estimator depends on the data and can be denoted $\hat\theta_{ML}(x_1,\dots,x_n)$.
Fisher-Neyman theorem. $T(X_1,\dots,X_n)$ is sufficient for $\theta$ if and only if the joint density can be represented as
(1) $f(x_1,\dots,x_n;\theta) = g\!\left(T(x_1,\dots,x_n),\theta\right)h(x_1,\dots,x_n),$
where, as the notation suggests, $g$ depends on $(x_1,\dots,x_n)$ only through $T$ and $h$ does not depend on $\theta$.
Maximizing the left side of (1) over $\theta$ is the same thing as maximizing $g(T(x_1,\dots,x_n),\theta)$, because $h$ does not depend on $\theta$. But this means that $\hat\theta_{ML}$ depends on $(x_1,\dots,x_n)$ only through $T$. A sufficient statistic is all you need to find the ML estimator. This interpretation is easier to understand than the definition of sufficiency.
Minimal sufficient statistic
Definition 2. A sufficient statistic $T(X_1,\dots,X_n)$ is called minimal sufficient if for any other sufficient statistic $S(X_1,\dots,X_n)$ there exists a function $g$ such that $T(X_1,\dots,X_n) = g(S(X_1,\dots,X_n))$.
A level set is a set of the type $\{(x_1,\dots,x_n): T(x_1,\dots,x_n) = c\}$ for a constant $c$ (which in general can be a constant vector). See the visualization of level sets. A level set is also called a preimage and denoted $T^{-1}(c)$. When $T$ is one-to-one, the preimage contains just one point. When $T$ is not one-to-one, the preimage contains more than one point. The wider it is, the less information about the sample the statistic carries (because many data sets are mapped to a single point and you cannot tell one data set from another by looking at the statistic value). This is illustrated by the following example.
Example 1. On the plane define two statistics: $T_1(x_1,x_2) = (x_1,x_2)$ and $T_2(x_1,x_2) = \frac{x_1+x_2}{2}$. For $T_1$ the level set $T_1^{-1}(c)$ consists of just one point, and knowing the statistic value is equivalent to knowing the whole sample. For $T_2$ the level set $\{(x_1,x_2): \frac{x_1+x_2}{2} = c\}$ is a straight line. If we know $T_2$, we know the sample mean but not the separate observations.
In the definition of the minimal sufficient statistic we have $T(X_1,\dots,X_n) = g(S(X_1,\dots,X_n))$, so that each level set of $T$ is a union of level sets of $S$. Since $g^{-1}(c)$ generally contains more than one point, this shows that the level sets of $T$ are generally wider than those of $S$. Since this is true for any sufficient statistic $S$, $T$ carries less information about the sample than any other sufficient statistic.
Definition 2 is an existence statement and is difficult to verify directly because of the words "for any" and "exists". Again, it is better to relate it to ML estimation.
Suppose for two sets of data $x = (x_1,\dots,x_n)$ and $y = (y_1,\dots,y_n)$ there is a positive number $k(x,y)$, not depending on $\theta$, such that
(2) $f(x_1,\dots,x_n;\theta) = k(x,y)f(y_1,\dots,y_n;\theta).$
Maximizing the left side over $\theta$ we get the estimator $\hat\theta_{ML}(x)$. Maximizing $f(y_1,\dots,y_n;\theta)$ we get $\hat\theta_{ML}(y)$. Since $k(x,y)$ does not depend on $\theta$, (2) tells us that $\hat\theta_{ML}(x) = \hat\theta_{ML}(y)$.
Thus, if two sets of data $x$, $y$ satisfy (2), the ML method cannot distinguish between $x$ and $y$ and supplies the same estimator. Let us call $x$ and $y$ indistinguishable if there is a positive number $k(x,y)$ such that (2) is true.
An equation $T(x) = T(y)$ means that $x$ and $y$ belong to the same level set of $T$.
Characterization of minimal sufficiency. A statistic $T$ is minimal sufficient if and only if its level sets coincide with sets of indistinguishable data.
The advantage of this formulation is that it relates a geometric notion of level sets to the ML estimator properties. The formulation in the guide by J. Abdey is:
A statistic $T$ is minimal sufficient if and only if the equality $T(x) = T(y)$ is equivalent to (2).
Rewriting (2) as
(3) $\frac{f(x_1,\dots,x_n;\theta)}{f(y_1,\dots,y_n;\theta)} = k(x,y),$
we get a practical way of finding a minimal sufficient statistic: form the ratio on the left of (3) and find the sets along which the ratio does not depend on $\theta$. Those sets will be the level sets of $T$.
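For the Poisson example from the midterm above, this ratio can even be formed symbolically (SymPy assumed; the sample size 3 is arbitrary) to see that $\lambda$ enters only through the difference of the two sums.

```python
import sympy as sp

lam = sp.symbols('lambda', positive=True)
x = sp.symbols('x1:4', integer=True, nonnegative=True)   # x1, x2, x3
y = sp.symbols('y1:4', integer=True, nonnegative=True)   # y1, y2, y3

def poisson_joint(sample):
    """Joint Poisson density e^{-n lam} lam^{sum} / prod(factorials)."""
    return (sp.exp(-len(sample) * lam) * lam**sum(sample)
            / sp.Mul(*[sp.factorial(s) for s in sample]))

ratio = sp.simplify(poisson_joint(x) / poisson_joint(y))
print(ratio)
# lambda appears only as lambda**(x1 + x2 + x3 - y1 - y2 - y3), so the ratio
# is free of lambda exactly on the sets where the two sums coincide.
```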
We have derived the density of the chi-squared variable with one degree of freedom; see also Example 3.52 in J. Abdey, Guide ST2133.
General chi-squared
For $\chi^2_n = u_1^2 + \dots + u_n^2$ with independent standard normal $u_1,\dots,u_n$, the chi-squared variables $u_i^2$ on the right are independent and all have one degree of freedom. This is because deterministic (here quadratic) functions of independent variables are independent.
Definition. The gamma distribution is a two-parametric family of densities. For $\alpha, \nu > 0$ the density is defined by
$f_{\alpha,\nu}(x) = \frac{\alpha^\nu}{\Gamma(\nu)}x^{\nu-1}e^{-\alpha x}, \qquad x > 0.$
Obviously, you need to know what a gamma function is. My notation of the parameters follows Feller, W., An Introduction to Probability Theory and its Applications, Volume II, 2nd edition (1971). It is different from the one used by J. Abdey in his guide ST2133.
Property 1
It is really a density because
$\int_0^\infty \frac{\alpha^\nu}{\Gamma(\nu)}x^{\nu-1}e^{-\alpha x}dx = \frac{1}{\Gamma(\nu)}\int_0^\infty t^{\nu-1}e^{-t}dt = 1$ (replace $t = \alpha x$).
Suppose you see an expression $x^{\nu-1}e^{-\alpha x}$ and need to determine which gamma density this is. The power of the exponent gives you $\alpha$, and the power of $x$ gives you $\nu - 1$, hence $\nu$. It follows that the normalizing constant should be $\frac{\alpha^\nu}{\Gamma(\nu)}$ and the density is $\frac{\alpha^\nu}{\Gamma(\nu)}x^{\nu-1}e^{-\alpha x}$, $x > 0$.
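A numerical sanity check of this recipe (SciPy assumed; the parameter values are arbitrary): integrating the kernel $x^{\nu-1}e^{-\alpha x}$ and inverting gives back the constant $\alpha^\nu/\Gamma(\nu)$.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as Gamma

alpha, nu = 2.0, 3.5

kernel_integral, _ = quad(lambda x: x**(nu - 1) * np.exp(-alpha * x), 0, np.inf)
print(1 / kernel_integral)        # empirical normalizing constant
print(alpha**nu / Gamma(nu))      # the constant alpha^nu / Gamma(nu)
```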
Property 2
The most important property is that the family of gamma densities with the same $\alpha$ is closed under convolutions: $f_{\alpha,\nu_1} * f_{\alpha,\nu_2} = f_{\alpha,\nu_1+\nu_2}$. Because of the associativity of convolution, it is enough to prove this for the case of two gamma densities.
Alternative proof. The moment generating function of a sum of two independent gamma variables with the same $\alpha$ shows that this sum is again a gamma variable with the same $\alpha$; see pp. 141, 209 in the guide ST2133.
The gamma function and gamma distribution are two different things. This post is about the former and is a preparatory step to study the latter.
Definition. The gamma function is defined by
$\Gamma(\nu) = \int_0^\infty t^{\nu-1}e^{-t}dt, \qquad \nu > 0.$
The integrand is smooth on $(0,\infty)$, so its integrability is determined by its behavior at $\infty$ and $0$. Because of the exponent, it is integrable in the neighborhood of $\infty$. The singularity at $0$ is integrable if $\nu > 0$. In all calculations involving the gamma function one should remember that its argument should be positive.
Properties
1) Factorial-like property. Integration by parts shows that
$\Gamma(\nu + 1) = \nu\Gamma(\nu)$ if $\nu > 0$.
2) $\Gamma(1) = 1$, because $\Gamma(1) = \int_0^\infty e^{-t}dt = 1$.
3) Combining the first two properties we see that for a natural $n$
$\Gamma(n + 1) = n\Gamma(n) = \dots = n!\,\Gamma(1) = n!$
Thus the gamma function extends the factorial to non-integer arguments.
4) $\Gamma\!\left(\frac{1}{2}\right) = \sqrt{\pi}$.
Indeed, using the fact that the density of the standard normal integrates to one, we see that (substitute $t = x^2/2$)
$\Gamma\!\left(\frac{1}{2}\right) = \int_0^\infty t^{-1/2}e^{-t}dt = \sqrt{2}\int_0^\infty e^{-x^2/2}dx = \sqrt{2}\cdot\frac{\sqrt{2\pi}}{2} = \sqrt{\pi}.$
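Properties 1)-4) are easy to confirm numerically with SciPy's implementation of the gamma function:

```python
from math import factorial, pi, sqrt
from scipy.special import gamma

print(gamma(0.5), sqrt(pi))          # property 4: Gamma(1/2) = sqrt(pi)
print(gamma(5), factorial(4))        # property 3: Gamma(n + 1) = n!
print(gamma(3.7), 2.7 * gamma(2.7))  # property 1: Gamma(nu + 1) = nu * Gamma(nu)
```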