Let $X$ be a random variable. The function $F_X(t)=P(X\le t),$ where $t$ runs over real numbers, is called the distribution function of $X.$ In statistics, many formulas are derived with the help of $F_X.$ The motivation and properties are given here.
Oftentimes, working with the distribution function is an intermediate step to obtain a density $p_X$ using the link $p_X(t)=F_X'(t).$
A series of exercises below show just how useful the distribution function is.
Exercise 1. Let $Y$ be a linear transformation of $X,$ that is, $Y=\sigma X+\mu,$ where $\sigma>0$ and $\mu$ is a real number. Find the link between $F_X$ and $F_Y.$ Find the link between $p_X$ and $p_Y.$
The more general case of a nonlinear transformation can also be handled:
Exercise 2. Let $Y=g(X),$ where $g$ is a deterministic function. Suppose that $g$ is strictly monotone and differentiable. Then $g^{-1}$ exists. Find the link between $F_X$ and $F_Y.$ Find the link between $p_X$ and $p_Y.$
Solution. The result differs depending on whether $g$ is increasing or decreasing. Let's assume the latter, so that $g(X)\le t$ is equivalent to $X\ge g^{-1}(t).$ Also for simplicity suppose that $p_X(t)>0$ for any $t.$ Then

$F_Y(t)=P(Y\le t)=P(X\ge g^{-1}(t))=1-F_X(g^{-1}(t)),$

and differentiation gives $p_Y(t)=-p_X(g^{-1}(t))\,(g^{-1})'(t),$ which is positive because $(g^{-1})'$ is negative for decreasing $g.$
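The decreasing case can be checked by simulation. The sketch below assumes $X$ is exponential with parameter 1 and $g(x)=e^{-x}$ (both are my hypothetical choices, not from the exercise); then $g^{-1}(t)=-\ln t$ and the formula $F_Y(t)=1-F_X(g^{-1}(t))$ gives $F_Y(t)=t,$ so $Y$ is uniform on $(0,1).$

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: X ~ Exp(1), g(x) = exp(-x) is strictly decreasing.
# The formula F_Y(t) = 1 - F_X(g^{-1}(t)) with g^{-1}(t) = -ln(t)
# gives F_Y(t) = t, i.e. Y = g(X) is uniform on (0, 1).
x = rng.exponential(size=100_000)
y = np.exp(-x)

for t in (0.2, 0.5, 0.8):
    print(t, (y <= t).mean())  # the empirical CDF should be close to t
```

The empirical CDF of the simulated $Y$ agrees with the formula at every checked point.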
There is a problem I gave on the midterm that does not require much imagination: just know the definitions and do the technical work. I was hoping we could put this behind us; it turned out we could not, and thus you see this post.
Problem. Suppose the joint density of variables is given by
I. Find the normalizing constant $k.$
II. Find the marginal densities of $X,Y.$ Are $X,Y$ independent?
III. Find the conditional densities $p_{X|Y},p_{Y|X}.$
IV. Find $EX,EY.$
When solving a problem like this, the first thing to do is to give the theory. You may not be able to finish the long calculations without errors, but your grade will be determined by the opening theoretical remarks.
I. Finding the normalizing constant
Any density should satisfy the completeness axiom: the area under the density curve (or, in this case, the volume under the density surface) must be equal to one:

$\int\!\!\int p(x,y)\,dxdy=1.$

The constant $k$ chosen to satisfy this condition is called a normalizing constant. The integration in general is over the whole plane, and the first task is to express the above integral as an iterated integral. This is where the domain where the density is not zero should be taken into account. There is little you can do without geometry. One example of how to do this is here.
The shape of the area is determined by a) the extreme values of $x,y$ and b) the relationship between them. The extreme values are 0 and 1 for both $x$ and $y$, meaning that the support is contained in the square $[0,1]\times[0,1].$ The inequality $y\le x$ means that we cut out of this square the triangle below the line $y=x$ (it is really the lower triangle because if from a point on the line we move down vertically, $x$ will stay the same and $y$ will become smaller than $x$).
In the iterated integral:
a) the lower and upper limits of integration for the inner integral are the boundaries for the inner variable; they may depend on the outer variable but not on the inner variable.
b) the lower and upper limits of integration for the outer integral are the extreme values for the outer variable; they must be constant.
This is illustrated in Pane A of Figure 1.
Figure 1. Integration order
Always take the inner integral in parentheses to show that you are dealing with an iterated integral.
a) In the inner integral, integrating over $x$ means moving along the blue arrows from the boundary $x=y$ to the boundary $x=1.$ The boundaries may depend on $y$ but not on $x$ because the outer integral is over $y.$
b) In the outer integral put the extreme values for the outer variable. Thus,

$\int_0^1\left(\int_y^1 p(x,y)\,dx\right)dy=1.$

Check that if we first integrate over $y$ (vertically along the red arrows, see Pane B in Figure 1) then the equation

$\int_0^1\left(\int_0^x p(x,y)\,dy\right)dx=1$

results.
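Both integration orders can be checked numerically. Since the density from the problem is not reproduced here, the sketch below uses a hypothetical density $p(x,y)=8xy$ on the triangle $0\le y\le x\le 1$ (my choice for illustration; it integrates to one over that triangle).

```python
from scipy.integrate import dblquad

# Hypothetical density for illustration: p(x, y) = 8xy on the triangle
# 0 <= y <= x <= 1, zero elsewhere.
# Inner variable y (from 0 to x), outer variable x (from 0 to 1):
v1, _ = dblquad(lambda y, x: 8 * x * y, 0, 1, lambda x: 0, lambda x: x)
# Inner variable x (from y to 1), outer variable y (from 0 to 1):
v2, _ = dblquad(lambda x, y: 8 * x * y, 0, 1, lambda y: y, lambda y: 1)
print(v1, v2)  # both integration orders give the same volume, here 1
```

The two orders give the same answer, as they must; only the limits of the inner integral change.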
II. Finding marginal densities

The marginal densities are obtained by integrating out the other variable. In fact, from the description of the triangle one can see that the inner interval for $x$ is $[y,1]$ and for $y$ it is $[0,x],$ so

$p_X(x)=\int_0^x p(x,y)\,dy,\qquad p_Y(y)=\int_y^1 p(x,y)\,dx.$
The condition for independence of $X,Y$ is $p(x,y)=p_X(x)p_Y(y)$ (this is a direct analog of the independence condition for events: $P(A\cap B)=P(A)P(B)$). In words: the joint density decomposes into a product of the individual densities.
III. Conditional densities
In this case the easiest way is to recall the definition of conditional probability: $P(A|B)=\frac{P(A\cap B)}{P(B)}.$ The definition of conditional densities is quite similar:

(2) $p_{X|Y}(x|y)=\frac{p(x,y)}{p_Y(y)},\qquad p_{Y|X}(y|x)=\frac{p(x,y)}{p_X(x)}.$

Of course, here the denominators are the marginal densities found in part II.
IV. Finding expected values of $X,Y$
The usual definition $EX=\int\!\!\int x\,p(x,y)\,dxdy$ takes an equivalent form using the marginal density:

$EX=\int x\,p_X(x)\,dx.$

Which equation to use is a matter of convenience.
Another replacement in the usual definition gives the definition of conditional expectations:

$E(X|Y=y)=\int x\,p_{X|Y}(x|y)\,dx,\qquad E(Y|X=x)=\int y\,p_{Y|X}(y|x)\,dy.$

Note that these are random variables: $E(X|Y)$ depends on $Y$ and $E(Y|X)$ depends on $X.$
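The equivalence of the two forms of $EX$ can be verified numerically. Again the density below is a hypothetical stand-in, $p(x,y)=8xy$ on the triangle $0\le y\le x\le 1$ (not the density from the problem); its marginal is $p_X(x)=\int_0^x 8xy\,dy=4x^3.$

```python
from scipy.integrate import dblquad, quad

# Hypothetical density for illustration: p(x, y) = 8xy on the triangle
# 0 <= y <= x <= 1, with marginal p_X(x) = 4x^3.
# EX via the joint density (inner variable y, outer x):
ex_joint, _ = dblquad(lambda y, x: x * 8 * x * y, 0, 1, lambda x: 0, lambda x: x)
# EX via the marginal density:
ex_marginal, _ = quad(lambda x: x * 4 * x**3, 0, 1)
print(ex_joint, ex_marginal)  # both equal 4/5
```

Both routes give $EX=4/5,$ illustrating that the choice is indeed a matter of convenience.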
Solution to the problem
Being a lazy guy, for the problem this post is about I provide the answers found with Mathematica:
I.
II. for for
It is readily seen that the independence condition is not satisfied.
Suppose in a box we have coins and banknotes of only two denominations: $1 and $5 (see Figure 1).
Figure 1. Illustration of two variables
We pull one out randomly. The division of cash by type (coin or banknote) divides the sample space (shown as a square, lower left picture) with probabilities $p_c$ and $p_b$ (they sum to one). The division by denomination ($1 or $5) divides the same sample space differently, see the lower right picture, with the probabilities to pull out $1 and $5 equal to $q_1$ and $q_5$, resp. (they also sum to one). This is summarized in the tables
Variable 1: Cash type
coin: Prob $p_c$
banknote: Prob $p_b$
Variable 2: Denomination
$1: Prob $q_1$
$5: Prob $q_5$
Now we can consider joint events and probabilities (see Figure 2, where the two divisions are combined).
Figure 2. Joint probabilities
For example, if we pull out a random item, it can be a coin and $1, and the corresponding probability is $P(\text{coin},\$1).$ The two divisions of the sample space generate a new division into four parts. Then geometrically it is obvious that we have four identities:
Adding over denominations: $P(\text{coin},\$1)+P(\text{coin},\$5)=p_c,$ $P(\text{banknote},\$1)+P(\text{banknote},\$5)=p_b.$
Adding over cash types: $P(\text{coin},\$1)+P(\text{banknote},\$1)=q_1,$ $P(\text{coin},\$5)+P(\text{banknote},\$5)=q_5.$
Formally, here we use additivity of probability for disjoint events: $P(A\cup B)=P(A)+P(B)$ when $A\cap B=\varnothing.$
In words: we can recover the own probabilities of variables 1 and 2 from the joint probabilities.
Generalization
Suppose we have two discrete random variables $X,Y$ taking values $x_1,\dots,x_n$ and $y_1,\dots,y_m,$ resp., and their own probabilities are $P(X=x_i)=p_i,$ $P(Y=y_j)=q_j.$ Denote the joint probabilities $P(X=x_i,Y=y_j)=p_{ij}.$ Then we have the identities

(1) $\sum_{j=1}^m p_{ij}=p_i,\qquad \sum_{i=1}^n p_{ij}=q_j$ ($n+m$ equations).

In words: to obtain the marginal probability of one variable (say, $X$) sum over the values of the other variable (in this case, $Y$).
The name marginal probabilities is used for $p_i,q_j$ because in the two-dimensional table they arise as a result of summing table entries along columns or rows and are displayed in the margins.
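The identities (1) amount to summing a joint probability table along its rows and columns. A minimal sketch, with a hypothetical joint table in the spirit of the coin/banknote example:

```python
import numpy as np

# Hypothetical joint probability table for illustration:
# rows = cash type (coin, banknote), columns = denomination ($1, $5).
joint = np.array([[0.30, 0.10],
                  [0.25, 0.35]])

p = joint.sum(axis=1)  # sum over denominations: marginals of cash type
q = joint.sum(axis=0)  # sum over cash types: marginals of denomination

print(p)  # [0.4 0.6]
print(q)  # [0.55 0.45]
```

Both marginal vectors sum to one, as they must, since the whole table does.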
Analogs for continuous variables with densities
Suppose we have two continuous random variables $X,Y$ and their own densities are $p_X$ and $p_Y.$ Denote the joint density $p(x,y).$ Then replacing in (1) sums by integrals and probabilities by densities we get

(2) $p_X(x)=\int p(x,y)\,dy,\qquad p_Y(y)=\int p(x,y)\,dx.$

In words: to obtain one marginal density (say, $p_X$) integrate out the other variable (in this case, $y$).
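Equation (2) is easy to check numerically. The sketch below uses a hypothetical joint density $p(x,y)=x+y$ on the unit square (my choice for illustration), whose marginal is $p_X(x)=\int_0^1(x+y)\,dy=x+\tfrac12.$

```python
from scipy.integrate import quad

# Hypothetical joint density for illustration: p(x, y) = x + y on the
# unit square. Integrating out y gives the marginal p_X(x) = x + 1/2.
def p_X(x):
    value, _ = quad(lambda y: x + y, 0, 1)  # integrate out the other variable
    return value

for x in (0.0, 0.25, 0.5):
    print(x, p_X(x))  # matches x + 1/2
```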
Suppose we are observing two stocks and their respective returns are $y_{1t},y_{2t}.$ To take into account their interdependence, we consider a vector autoregression

(1) $y_{1t}=a_1+a_{11}y_{1,t-1}+a_{12}y_{2,t-1}+e_{1t},\qquad y_{2t}=a_2+a_{21}y_{1,t-1}+a_{22}y_{2,t-1}+e_{2t}.$

Try to repeat for this system the analysis from Section 3.5 (Application to an AR(1) process) of the Guide by A. Patton and you will see that the difficulties seem insurmountable. However, matrix algebra allows one to overcome them, with proper adjustment.
Problem
A) Write this system in a vector format

(2) $y_t=a+Ay_{t-1}+e_t.$

What should $y_t,a,A,e_t$ be in this representation?
B) Assume that the error $e_t$ in (2) satisfies

(3) $E_{t-1}e_t=0,\quad E_{t-1}(e_te_t^T)=\Sigma$ for all $t,$ with some symmetric matrix $\Sigma.$

What does this assumption mean in terms of the components of $e_t$ from (2)? What is $\Sigma$ if the errors in (1) satisfy

(4) $E_{t-1}e_{it}=0,\quad E_{t-1}(e_{it}e_{jt})=\sigma_{ij}$ for $i,j=1,2,$ for all $t$?

C) Suppose (1) is stationary. The stationarity condition is expressed in terms of eigenvalues of $A,$ but we don't need it. However, we need its implication:

(5) $Ey_t=Ey_{t-1}$ for all $t.$

Find $Ey_t.$
D) Find $E_{t-1}y_t.$
E) Find $Var_{t-1}(y_t).$
F) Find $Var(y_t).$
G) Find $Cov(y_t,y_{t-1}).$
Solution
A) It takes some practice to see that with the notation

$y_t=\begin{pmatrix}y_{1t}\\y_{2t}\end{pmatrix},\quad a=\begin{pmatrix}a_1\\a_2\end{pmatrix},\quad A=\begin{pmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{pmatrix},\quad e_t=\begin{pmatrix}e_{1t}\\e_{2t}\end{pmatrix},$

the system (1) becomes (2).
B) The equations in (3) look like this:

$E_{t-1}\begin{pmatrix}e_{1t}\\e_{2t}\end{pmatrix}=\begin{pmatrix}0\\0\end{pmatrix},\qquad E_{t-1}\begin{pmatrix}e_{1t}^2&e_{1t}e_{2t}\\e_{2t}e_{1t}&e_{2t}^2\end{pmatrix}=\begin{pmatrix}\sigma_{11}&\sigma_{12}\\\sigma_{21}&\sigma_{22}\end{pmatrix}.$

Equalities of matrices are understood element-wise, so we get a series of scalar equations $E_{t-1}e_{it}=0,$ $E_{t-1}(e_{it}e_{jt})=\sigma_{ij}$ for $i,j=1,2.$
Conversely, the scalar equations from (4) give

$\Sigma=\begin{pmatrix}\sigma_{11}&\sigma_{12}\\\sigma_{21}&\sigma_{22}\end{pmatrix}$ for all $t.$

C) (2) implies $Ey_t=a+AEy_{t-1}+Ee_t=a+AEy_{t-1},$ or by stationarity $Ey_t=a+AEy_t,$ or $(I-A)Ey_t=a.$ Hence (5) implies $Ey_t=(I-A)^{-1}a.$
D) From (2) we see that $a+Ay_{t-1}$ depends only on $I_{t-1}$ (the information set at time $t-1$). Therefore by the LIE $E_{t-1}y_t=a+Ay_{t-1}+E_{t-1}e_t=a+Ay_{t-1}.$
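The stationary-mean formula from C) can be checked by simulating a VAR(1). The numbers below ($a,$ $A,$ the error covariance) are hypothetical example values chosen so that the eigenvalues of $A$ lie inside the unit circle.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stationary VAR(1): eigenvalues of A are inside the unit
# circle, errors are i.i.d. normal with the given covariance matrix.
a = np.array([0.5, 1.0])
A = np.array([[0.5, 0.1],
              [0.2, 0.3]])
L = np.linalg.cholesky(np.array([[1.0, 0.3],
                                 [0.3, 2.0]]))

T = 100_000
y = np.zeros(2)
total = np.zeros(2)
for _ in range(T):
    y = a + A @ y + L @ rng.standard_normal(2)  # y_t = a + A y_{t-1} + e_t
    total += y

print(total / T)                          # simulated mean of y_t
print(np.linalg.solve(np.eye(2) - A, a))  # (I - A)^{-1} a
```

The time average of the simulated series matches $(I-A)^{-1}a,$ as the stationarity argument predicts.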
This is the exam I administered in my class in Spring 2022. By replacing the Poisson distribution with other random variables the UoL examiners can obtain a large variety of versions with which to torture Advanced Statistics students. On the other hand, for the students the answers below can be a blueprint to fend off any assaults.
During the semester my students were encouraged to analyze and collect information in documents typed in Scientific Word or LyX. The exam was an open-book online assessment. Papers typed in Scientific Word or LyX were preferred and copying from previous analysis was welcomed. This policy would be my preference if I were to study a subject as complex as Advanced Statistics. The students were given just two hours on the assumption that they had done the preparations diligently. Below I give the model answers right after the questions.
Midterm Spring 2022
You have to clearly state all required theoretical facts. Number all equations that you need to use in later calculations and reference them as necessary. Answer the questions in the order they are asked. When you don't know the answer, leave some space. For each unexplained fact I subtract one point. Put your name in the file name.
In questions 1-9, $X$ is a Poisson variable with parameter $\lambda.$
Question 1
Define and derive the population mean and population variance of the sum $S_n=\sum_{i=1}^n X_i,$ where $X_1,\dots,X_n$ is an i.i.d. sample from $X.$
Answer. $S_n$ is defined by $S_n=\sum_{i=1}^n X_i.$ Using $EX=\lambda$ and $Var(X)=\lambda$ (ST2133 p.80) we have

$ES_n=\sum_{i=1}^n EX_i=n\lambda,\qquad Var(S_n)=\sum_{i=1}^n Var(X_i)=n\lambda$

(by independence and identical distribution). [Some students derived the respective equations for the sample mean instead of those for the sum.]
Question 2
Derive the MGF of the standardized sample mean.
Answer. Knowing this derivation is a must because it is a combination of three important facts.
a) Let $z_n=\frac{S_n-ES_n}{\sqrt{Var(S_n)}}.$ Then $z_n=\frac{\bar X-E\bar X}{\sqrt{Var(\bar X)}},$ so standardizing $S_n$ and $\bar X$ gives the same result.
b) The MGF of $S_n$ is expressed through the MGF of $X$:

$M_{S_n}(t)=Ee^{tS_n}=Ee^{tX_1}\cdots Ee^{tX_n}$ (independence) $=[M_X(t)]^n$ (identical distribution).

c) If $Y=a+bX$ is a linear transformation of $X,$ then $M_Y(t)=e^{at}M_X(bt).$
When answering the question we assume any i.i.d. sample from a population with mean $\mu$ and population variance $\sigma^2$:

$z_n=\frac{S_n-n\mu}{\sqrt{n}\,\sigma}.$

Putting in c) $a=-\frac{\sqrt{n}\,\mu}{\sigma},$ $b=\frac{1}{\sqrt{n}\,\sigma}$ and using a) we get

$M_{z_n}(t)=e^{-\frac{\sqrt{n}\,\mu}{\sigma}t}\left[M_X\!\left(\frac{t}{\sqrt{n}\,\sigma}\right)\right]^n$

(using b) and c)).
This is a general result which for the Poisson distribution can be specified as follows. From ST2133, example 3.38 we know that $M_X(t)=e^{\lambda(e^t-1)}.$ Therefore, with $\mu=\lambda$ and $\sigma=\sqrt{\lambda},$ we obtain

$M_{z_n}(t)=e^{-\sqrt{n\lambda}\,t}\,e^{n\lambda\left(e^{t/\sqrt{n\lambda}}-1\right)}.$

[Instead of the MGF of $z_n$ some students gave the MGF of a different variable.]
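The closed-form MGF of the standardized Poisson sum can be checked against simulation. The values of $\lambda,$ $n,$ $t$ below are hypothetical examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Check of M_{z_n}(t) = exp(-sqrt(n*lam)*t) * exp(n*lam*(exp(t/sqrt(n*lam)) - 1));
# lam, n, t are hypothetical example values.
lam, n, t = 3.0, 50, 0.5
root = np.sqrt(n * lam)
s = rng.poisson(lam, size=(200_000, n)).sum(axis=1)
z = (s - n * lam) / root

mgf_sim = np.mean(np.exp(t * z))                        # E exp(t z_n), simulated
mgf_formula = np.exp(-root * t + n * lam * (np.exp(t / root) - 1))
print(mgf_sim, mgf_formula)  # both are near exp(t^2/2), the normal MGF
```

For moderate $n$ both numbers are already close to $e^{t^2/2},$ anticipating the central limit theorem below.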
Question 3
Derive the cumulant generating function of the standardized sample mean.
Answer. Again, there are a couple of useful general facts.
I) Decomposition of MGF around zero. The series $e^{tX}=\sum_{k=0}^\infty\frac{(tX)^k}{k!}$ leads to

$M_X(t)=Ee^{tX}=1+tEX+\frac{t^2}{2}EX^2+\dots,$

where $EX^k$ are the moments of $X.$ Differentiating this equation yields

$M_X'(t)=EX+tEX^2+\dots,\qquad M_X''(t)=EX^2+tEX^3+\dots,$

and setting $t=0$ gives the rule for finding moments from MGF: $M_X'(0)=EX,$ $M_X''(0)=EX^2.$
II) Decomposition of the cumulant generating function around zero. $K_X(t)=\log M_X(t)$ can also be decomposed into its Taylor series:

$K_X(t)=\kappa_0+\kappa_1 t+\frac{\kappa_2}{2}t^2+\dots,$

where the coefficients $\kappa_k$ are called cumulants and can be found using $\kappa_k=K_X^{(k)}(0).$ Since

$K_X(0)=\log M_X(0)=0,\qquad K_X'(0)=\frac{M_X'(0)}{M_X(0)}=EX$

and

$K_X''(0)=\frac{M_X''(0)M_X(0)-[M_X'(0)]^2}{[M_X(0)]^2}=EX^2-(EX)^2=Var(X),$

we have $\kappa_0=0,$ $\kappa_1=EX,$ $\kappa_2=Var(X).$
Thus, for any random variable $X$ with mean $\mu$ and variance $\sigma^2$ we have

$K_X(t)=\mu t+\frac{\sigma^2}{2}t^2+$ terms of higher order, for $t$ small.

III) If $Y=a+bX$ then by c) $K_Y(t)=\log\left[e^{at}M_X(bt)\right]=at+K_X(bt).$
IV) By b) $K_{S_n}(t)=\log[M_X(t)]^n=nK_X(t).$
Using III) with $z_n=\frac{S_n-n\mu}{\sqrt{n}\,\sigma}$ and then IV) we have

$K_{z_n}(t)=-\frac{\sqrt{n}\,\mu}{\sigma}t+K_{S_n}\!\left(\frac{t}{\sqrt{n}\,\sigma}\right)=-\frac{\sqrt{n}\,\mu}{\sigma}t+nK_X\!\left(\frac{t}{\sqrt{n}\,\sigma}\right).$

For the last term on the right we use the approximation around zero from II):

$K_{z_n}(t)=-\frac{\sqrt{n}\,\mu}{\sigma}t+n\left[\mu\frac{t}{\sqrt{n}\,\sigma}+\frac{\sigma^2}{2}\frac{t^2}{n\sigma^2}+\dots\right]=\frac{t^2}{2}+\text{higher-order terms}.$

[Important. Why are the above steps necessary? Passing from the series for $M_X$ to the series for $K_X$ is not straightforward and can easily lead to errors. In the case of the Poisson it is not advisable to derive $K_{z_n}$ directly from $M_{z_n}$.]
Question 4
Prove the central limit theorem using the cumulant generating function you obtained.
Answer. In the previous question we proved that around zero

$K_{z_n}(t)\to\frac{t^2}{2}.$

This implies that

(1) $M_{z_n}(t)=e^{K_{z_n}(t)}\to e^{t^2/2}$ for each $t$ around zero.

But we know that for a standard normal $z$ its MGF is $M_z(t)=e^{t^2/2}$ (ST2133 example 3.42) and hence for the standard normal

(2) $M_z(t)=e^{t^2/2}.$

Theorem (link between pointwise convergence of MGFs of $\{z_n\}$ and convergence in distribution of $\{z_n\}$). Let $\{z_n\}$ be a sequence of random variables and let $z$ be some random variable. If $M_{z_n}(t)$ converges for each $t$ from a neighborhood of zero to $M_z(t),$ then $z_n$ converges in distribution to $z.$
Using (1), (2) and this theorem we finish the proof that $z_n$ converges in distribution to the standard normal, which is the central limit theorem.
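The conclusion can be visualized numerically: for a Poisson sample the distribution of the standardized sum is already close to standard normal for moderate $n.$ A sketch with hypothetical values of $\lambda$ and $n$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# CLT check: the standardized Poisson sum is approximately N(0, 1);
# lam and n are hypothetical example values.
lam, n = 3.0, 200
s = rng.poisson(lam, size=(100_000, n)).sum(axis=1)
z = (s - n * lam) / np.sqrt(n * lam)

for t in (-1.0, 0.0, 1.0):
    print(t, (z <= t).mean(), norm.cdf(t))  # empirical CDF vs normal CDF
```

The empirical CDF of $z_n$ agrees with the standard normal CDF up to the discreteness of the Poisson lattice.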
Question 5
State the factorization theorem and apply it to show that $\sum_{i=1}^n X_i$ is a sufficient statistic.
Answer. The solution is given on p.180 of ST2134. For the Poisson the joint density is

(3) $p(x_1,\dots,x_n;\lambda)=\prod_{i=1}^n\frac{e^{-\lambda}\lambda^{x_i}}{x_i!}=e^{-n\lambda}\lambda^{\sum_i x_i}\prod_{i=1}^n\frac{1}{x_i!},$

and then we see that $T=\sum_i x_i$ is a sufficient statistic for $\lambda$ (take $g(T,\lambda)=e^{-n\lambda}\lambda^T$ and $h(x)=\prod_i 1/x_i!$).
Question 6
Find a minimal sufficient statistic for $\lambda,$ stating all necessary theoretical facts.
Answer. Characterization of minimal sufficiency: a statistic $T$ is minimal sufficient if and only if the level sets of $T$ coincide with the sets on which the ratio $p(x;\lambda)/p(y;\lambda)$ does not depend on $\lambda.$
From (3)

$\frac{p(x;\lambda)}{p(y;\lambda)}=\lambda^{\sum_i x_i-\sum_i y_i}\prod_{i=1}^n\frac{y_i!}{x_i!}.$

The expression on the right does not depend on $\lambda$ if and only if $\sum_i x_i=\sum_i y_i.$ The last condition describes the level sets of $T=\sum_i x_i.$ Thus it is minimal sufficient.
Question 7
Find the Method of Moments estimator of the population mean.
Answer. The idea of the method is to take some population property (for example, $EX=\lambda$) and replace the population characteristic (in this case $EX$) by its sample analog ($\bar X$) to obtain a MM estimator. In our case $\hat\lambda_{MM}=\bar X.$ [Try to do this for the Gamma distribution.]
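A one-line simulation shows the MM estimator at work for the Poisson (the true $\lambda$ below is a hypothetical example value):

```python
import numpy as np

rng = np.random.default_rng(0)

# Method of Moments for the Poisson: the population equation EX = lambda
# is solved after replacing EX by the sample mean.
lam = 3.0  # hypothetical true parameter
sample = rng.poisson(lam, size=100_000)
lam_mm = sample.mean()
print(lam_mm)  # close to the true lambda = 3.0
```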
Question 8
Find the Fisher information.
Answer. From Problem 5 the log-likelihood is

$\ell(\lambda;x)=-n\lambda+\log\lambda\sum_{i=1}^n x_i-\sum_{i=1}^n\log(x_i!).$

Hence the score function is (see Example 2.30 in ST2134)

$s(\lambda;x)=\frac{\partial\ell}{\partial\lambda}=-n+\frac{1}{\lambda}\sum_{i=1}^n x_i.$

Then

$\frac{\partial^2\ell}{\partial\lambda^2}=-\frac{1}{\lambda^2}\sum_{i=1}^n x_i,$

and the Fisher information is

$I_n(\lambda)=-E\frac{\partial^2\ell}{\partial\lambda^2}=\frac{n\lambda}{\lambda^2}=\frac{n}{\lambda}.$
Question 9
Derive the Cramer-Rao lower bound for $Var(\bar X)$ for a random sample.
Answer. (See Example 3.17 in ST2134.) Since $\bar X$ is an unbiased estimator of $\lambda$ by Problem 1, from the Cramer-Rao theorem we know that

$Var(\bar X)\ge\frac{1}{I_n(\lambda)}=\frac{\lambda}{n},$

and in fact by Problem 1 this lower bound is attained: $Var(\bar X)=\lambda/n.$
Here we show that the knowledge of the distribution of $s^2$ for linear regression allows one to do without the long calculations contained in the guide ST2134 by J. Abdey.
Theorem. Let $z_1,\dots,z_n$ be independent observations from $N(\mu,\sigma^2).$ 1) $\frac{(n-1)s^2}{\sigma^2}$ is distributed as $\chi^2_{n-1}.$ 2) The estimators $\bar z$ and $s^2$ are independent. 3) $Es^2=\sigma^2.$ 4) $Var(s^2)=\frac{2\sigma^4}{n-1}.$ 5) $\frac{s^2-\sigma^2}{\sigma^2\sqrt{2/(n-1)}}$ converges in distribution to $N(0,1).$
Proof. We can write $z=X\beta+e,$ where $e$ is distributed as $N(0,\sigma^2 I).$ Putting $\beta=\mu$ and $X=(1,\dots,1)^T$ (a vector of ones) we satisfy (1) and (2) below, with $k=1,$ so 1) and 2) follow from the theorem proved below. Since $E\chi^2_{n-1}=n-1$ and $Var(\chi^2_{n-1})=2(n-1),$ we have $Es^2=\frac{\sigma^2}{n-1}E\chi^2_{n-1}=\sigma^2$ and $Var(s^2)=\frac{\sigma^4}{(n-1)^2}Var(\chi^2_{n-1})=\frac{2\sigma^4}{n-1}.$ Further, 5) follows from the central limit theorem applied to $\chi^2_{n-1},$ a sum of $n-1$ i.i.d. squared standard normals.
Distribution of the estimator of the error variance
If you are reading the book by Dougherty: this post is about the distribution of the estimator $s^2$ defined in Chapter 3.
Consider the regression

(1) $y=X\beta+e,$

where the deterministic matrix $X$ is of size $n\times k,$ satisfies $\det(X^TX)\neq 0$ (regressors are not collinear) and the error $e$ satisfies

(2) $Ee=0,\qquad Var(e)=\sigma^2 I.$

$\beta$ is estimated by $\hat\beta=(X^TX)^{-1}X^Ty.$ Denote $P=X(X^TX)^{-1}X^T,$ $Q=I-P.$ Using (1) we see that $\hat\beta=\beta+(X^TX)^{-1}X^Te$ and the residual is $r=y-X\hat\beta=Qe.$ $\sigma^2$ is estimated by

(3) $s^2=\frac{\|r\|^2}{n-k}=\frac{\|Qe\|^2}{n-k}.$

$Q$ is a projector and has properties which are derived from those of $P$:

(4) $Q^T=Q,\qquad Q^2=Q.$

If $\lambda$ is an eigenvalue of $Q,$ then multiplying $Qx=\lambda x$ by $Q$ and using the fact that $Q^2=Q$ we get $\lambda x=\lambda^2 x.$ Hence the eigenvalues of $Q$ can be only 0 or 1. The equation

$tr\,Q=n-k$

tells us that the number of eigenvalues equal to 1 is $n-k$ and the remaining $k$ are zeros. Let $Q=U\Lambda U^T$ be the diagonal representation of $Q.$ Here $U$ is an orthogonal matrix,

(5) $UU^T=I,$

and $\Lambda$ is a diagonal matrix with the eigenvalues of $Q$ on the main diagonal. We can assume that the first $n-k$ numbers on the diagonal of $\Lambda$ are ones and the others are zeros.
Theorem. Let $e$ be normal. 1) $\frac{(n-k)s^2}{\sigma^2}$ is distributed as $\chi^2_{n-k}.$ 2) The estimators $\hat\beta$ and $s^2$ are independent.
Proof. 1) We have by (4)

(6) $\|Qe\|^2=e^TQ^TQe=e^TQe=e^TU\Lambda U^Te.$

Denote $u=U^Te.$ From (2) and (5)

$Eu=0,\qquad Var(u)=E(uu^T)=U^TE(ee^T)U=\sigma^2U^TU=\sigma^2 I,$

and $u$ is normal as a linear transformation of a normal vector. It follows that $u=\sigma z,$ where $z$ is a standard normal vector with independent standard normal coordinates $z_1,\dots,z_n.$ Hence, (6) implies

(7) $\|Qe\|^2=u^T\Lambda u=\sigma^2\sum_{i=1}^{n-k}z_i^2=\sigma^2\chi^2_{n-k}.$

(3) and (7) prove the first statement.
2) First we note that the vectors $Pe,Qe$ are independent. Since they are normal, their independence follows from

$Cov(Pe,Qe)=E\left[Pe(Qe)^T\right]=PE(ee^T)Q=\sigma^2PQ=0.$

It's easy to see that $X^TP=X^T.$ This allows us to show that $\hat\beta$ is a function of $Pe$:

$\hat\beta=\beta+(X^TX)^{-1}X^Te=\beta+(X^TX)^{-1}X^TPe.$

Independence of $Pe,Qe$ leads to independence of their functions $\hat\beta$ and $s^2.$
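Both claims of the theorem can be checked by simulation. The sketch below uses a hypothetical design matrix, coefficient vector and error variance (my choices, not from the text) and verifies that $(n-k)s^2/\sigma^2$ has the mean and variance of $\chi^2_{n-k},$ and that $\hat\beta$ is uncorrelated with $s^2.$

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical regression setup: (n-k) s^2 / sigma^2 should behave as
# chi-squared with n-k degrees of freedom, independently of b_hat.
n, k, sigma, reps = 20, 3, 2.0, 50_000
X = rng.standard_normal((n, k))
beta = np.array([1.0, -2.0, 0.5])

XtXinv = np.linalg.inv(X.T @ X)
Q = np.eye(n) - X @ XtXinv @ X.T              # projector on the error space

E = sigma * rng.standard_normal((reps, n))    # one error vector per row
R = E @ Q                                     # residuals r = Qe (Q symmetric)
chi = (R * R).sum(axis=1) / sigma**2          # (n-k) s^2 / sigma^2
b0 = beta[0] + (E @ X @ XtXinv)[:, 0]         # first coordinate of b_hat

print(chi.mean(), chi.var())                  # near n-k = 17 and 2(n-k) = 34
print(np.corrcoef(chi, b0)[0, 1])             # near 0
```

The simulated mean and variance match those of $\chi^2_{17},$ and the correlation between $\hat\beta$ and $s^2$ is negligible, in line with statements 1) and 2).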
I find that in the notation of a statistic it is better to reflect the dependence on the argument. So I write $T(X)$ for a statistic, where $X=(X_1,\dots,X_n)$ is a sample, instead of a faceless $T$ or $S.$
Definition 1. The statistic $T(X)$ is called sufficient for the parameter $\theta$ if the distribution of $X$ conditional on $T(X)$ does not depend on $\theta.$
The main results on sufficiency and minimal sufficiency become transparent if we look at them from the point of view of Maximum Likelihood (ML) estimation.
Let $p(X,\theta)$ be the joint density of the vector $X=(X_1,\dots,X_n),$ where $\theta$ is a parameter (possibly a vector). The ML estimator is obtained by maximizing over $\theta$ the function $p(X,\theta),$ with $X$ fixed at the observed data. The estimator depends on the data and can be denoted $\hat\theta_{ML}(X).$
Fisher-Neyman theorem. $T(X)$ is sufficient for $\theta$ if and only if the joint density can be represented as

(1) $p(X,\theta)=g(T(X),\theta)\,h(X),$

where, as the notation suggests, $g$ depends on $X$ only through $T(X),$ and $h$ does not depend on $\theta.$
Maximizing the left side of (1) is the same thing as maximizing $g(T(X),\theta),$ because $h$ does not depend on $\theta.$ But this means that $\hat\theta_{ML}$ depends on $X$ only through $T(X).$ A sufficient statistic is all you need to find the ML estimator. This interpretation is easier to understand than the definition of sufficiency.
Minimal sufficient statistic
Definition 2. A sufficient statistic $T(X)$ is called minimal sufficient if for any other sufficient statistic $S(X)$ there exists a function $f$ such that $T(X)=f(S(X)).$
A level set is a set of type $\{X:T(X)=c\}$ for a constant $c$ (which in general can be a constant vector). See the visualization of level sets. A level set is also called a preimage and denoted $T^{-1}(c)=\{X:T(X)=c\}.$ When $T$ is one-to-one the preimage contains just one point. When $T$ is not one-to-one the preimage contains more than one point. The wider it is, the less information about the sample the statistic carries (because many data sets are mapped to a single point and you cannot tell one data set from another by looking at the statistic value). In the definition of the minimal sufficient statistic we have

$T^{-1}(c)=\{X:f(S(X))=c\}=S^{-1}(f^{-1}(c)).$

Since $f^{-1}(c)$ generally contains more than one point, this shows that the level sets of $T$ are generally wider than those of $S.$ Since this is true for any sufficient statistic $S,$ $T$ carries less information about the sample than any other sufficient statistic.
Definition 2 is an existence statement and is difficult to verify directly because of the words "for any" and "exists". Again it's better to relate it to ML estimation.
Suppose for two sets of data $X,Y$ there is a positive number $k(X,Y)$ such that

(2) $p(X,\theta)=k(X,Y)\,p(Y,\theta).$

Maximizing the left side we get the estimator $\hat\theta_{ML}(X).$ Maximizing $p(Y,\theta)$ we get $\hat\theta_{ML}(Y).$ Since $k(X,Y)$ does not depend on $\theta,$ (2) tells us that

$\hat\theta_{ML}(X)=\hat\theta_{ML}(Y).$

Thus, if two sets of data satisfy (2), the ML method cannot distinguish between $X$ and $Y$ and supplies the same estimator. Let us call $X,Y$ indistinguishable if there is a positive number $k(X,Y)$ such that (2) is true.
An equation $T(X)=T(Y)$ means that $X,Y$ belong to the same level set.
Characterization of minimal sufficiency. A statistic $T(X)$ is minimal sufficient if and only if its level sets coincide with sets of indistinguishable $X,Y.$
The advantage of this formulation is that it relates a geometric notion of level sets to the ML estimator properties. The formulation in the guide by J. Abdey is:
A statistic $T(X)$ is minimal sufficient if and only if the equality $T(X)=T(Y)$ is equivalent to (2).
Rewriting (2) as

(3) $\frac{p(X,\theta)}{p(Y,\theta)}=k(X,Y),$

we get a practical way of finding a minimal sufficient statistic: form the ratio on the left of (3) and find the sets along which the ratio does not depend on $\theta.$ Those sets will be the level sets of $T.$
We have derived the density of the chi-squared variable with one degree of freedom, see also Example 3.52, J. Abdey, Guide ST2133.
General chi-squared
For $\chi^2_n=z_1^2+\dots+z_n^2$ with independent standard normals $z_1,\dots,z_n$ we can write $\chi^2_n$ as a sum of $n$ chi-squared variables, where the chi-squared variables on the right are independent and all have one degree of freedom. This is because deterministic (here quadratic) functions of independent variables are independent.
Definition. The gamma distribution is a two-parametric family of densities. For $\alpha>0,$ $\nu>0$ the density is defined by

$f_{\alpha,\nu}(x)=\frac{\alpha^\nu}{\Gamma(\nu)}x^{\nu-1}e^{-\alpha x},\qquad x>0.$

Obviously, you need to know what a gamma function is. My notation of the parameters follows Feller, W. An Introduction to Probability Theory and its Applications, Volume II, 2nd edition (1971). It is different from the one used by J. Abdey in his guide ST2133.
Property 1
It is really a density because

$\int_0^\infty\frac{\alpha^\nu}{\Gamma(\nu)}x^{\nu-1}e^{-\alpha x}\,dx=\frac{1}{\Gamma(\nu)}\int_0^\infty t^{\nu-1}e^{-t}\,dt=1$ (replace $t=\alpha x$).

Suppose you see an expression $x^{\nu-1}e^{-\alpha x}$ and need to determine which gamma density this is. The power of the exponent gives you $\alpha$ and the power of $x$ gives you $\nu-1.$ It follows that the normalizing constant should be $\frac{\alpha^\nu}{\Gamma(\nu)}$ and the density is $f_{\alpha,\nu}.$
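The normalizing constant is easy to verify numerically for any parameter values; the $\alpha$ and $\nu$ below are hypothetical examples.

```python
import math
from scipy.integrate import quad

# Check that alpha^nu / Gamma(nu) normalizes x^(nu-1) e^(-alpha x) on
# (0, infinity); alpha and nu are hypothetical example values.
alpha, nu = 2.0, 3.5
const = alpha**nu / math.gamma(nu)
total, _ = quad(lambda x: const * x**(nu - 1) * math.exp(-alpha * x), 0, math.inf)
print(total)  # close to 1
```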
Property 2
The most important property is that the family of gamma densities with the same $\alpha$ is closed under convolutions. Because of the associativity of convolution it is enough to prove this for the case of two gamma densities.
Alternative proof. The moment generating function of a sum of two independent gamma distributions with the same $\alpha$ shows that this sum is again a gamma distribution with the same $\alpha$, see pp. 141, 209 in the guide ST2133.
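The convolution property can also be confirmed by simulation: the sum of independent gamma variables with the same rate is compared to the claimed gamma distribution via the Kolmogorov-Smirnov distance. The parameter values below are hypothetical examples.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Sum of independent Gamma(nu1) and Gamma(nu2) with the same rate alpha
# should be Gamma(nu1 + nu2) with that rate; parameters are hypothetical.
alpha, nu1, nu2 = 2.0, 1.5, 3.0
x = rng.gamma(shape=nu1, scale=1 / alpha, size=200_000)
y = rng.gamma(shape=nu2, scale=1 / alpha, size=200_000)

d, _ = stats.kstest(x + y, stats.gamma(a=nu1 + nu2, scale=1 / alpha).cdf)
print(d)  # Kolmogorov-Smirnov distance: small if the claim holds
```

The distance is of the order $1/\sqrt{n},$ consistent with the sum actually having the claimed gamma distribution.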