Dec 21

Gamma distribution

Definition. The gamma distribution Gamma\left( \alpha ,\nu \right) is a two-parameter family of densities. For \alpha >0,\ \nu >0 the density is defined by

f_{\alpha ,\nu }\left( x\right) =\frac{1}{\Gamma \left( \nu \right) }\alpha ^{\nu }x^{\nu -1}e^{-\alpha x},\ x>0;\quad f_{\alpha ,\nu }\left( x\right) =0,\ x\leq 0.
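In code the density reads as follows (a minimal sketch; the function name is mine):

```python
import math

def gamma_pdf(x, alpha, nu):
    """Density f_{alpha,nu} of Gamma(alpha, nu) in Feller's parameterization:
    alpha > 0 is the rate (coefficient in the exponent), nu > 0 is the shape."""
    if x <= 0:
        return 0.0
    return alpha ** nu * x ** (nu - 1) * math.exp(-alpha * x) / math.gamma(nu)
```

For reference, this agrees with scipy.stats.gamma(a=nu, scale=1/alpha).pdf(x), since SciPy parameterizes by shape and scale rather than shape and rate.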

Obviously, you need to know what a gamma function is. My notation for the parameters follows Feller, W., An Introduction to Probability Theory and Its Applications, Volume II, 2nd edition (1971). It differs from the one used by J. Abdey in his guide ST2133.

Property 1

It is really a density because

\frac{1}{\Gamma \left( \nu \right) }\alpha ^{\nu }\int_{0}^{\infty }x^{\nu-1}e^{-\alpha x}dx= (replace \alpha x=t)

=\frac{1}{\Gamma \left( \nu \right) }\alpha ^{\nu }\int_{0}^{\infty }\left( \frac{t}{\alpha }\right) ^{\nu -1}e^{-t}\frac{dt}{\alpha }=\frac{1}{\Gamma \left( \nu \right) }\int_{0}^{\infty }t^{\nu -1}e^{-t}dt=1,

where the last step is just the definition of the gamma function.
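Property 1 is easy to confirm numerically. Here is a sketch using a plain trapezoidal rule on a truncated interval (the cut-off and grid size are my own crude choices, adequate for shapes \nu \geq 1):

```python
import math

def gamma_pdf(x, alpha, nu):
    # f_{alpha,nu}(x) for x > 0, zero otherwise
    if x <= 0:
        return 0.0
    return alpha ** nu * x ** (nu - 1) * math.exp(-alpha * x) / math.gamma(nu)

def total_mass(alpha, nu, upper=None, n=100000):
    # composite trapezoidal rule on [0, upper]; the tail beyond is negligible
    if upper is None:
        upper = (nu + 40.0) / alpha  # cut-off far past the mean nu/alpha
    h = upper / n
    s = 0.5 * (gamma_pdf(0.0, alpha, nu) + gamma_pdf(upper, alpha, nu))
    for i in range(1, n):
        s += gamma_pdf(i * h, alpha, nu)
    return s * h
```

The result is 1 up to discretization error for any rate \alpha and any shape \nu \geq 1 (for \nu <1 the integrand blows up at the origin and this naive rule would need refinement).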

Suppose you see an expression x^{a}e^{-bx} and need to determine which gamma density it corresponds to. The coefficient in the exponent gives you \alpha =b and the power of x gives you \nu =a+1. It follows that the normalizing constant should be \frac{1}{\Gamma \left( a+1\right) }b^{a+1} and the density is \frac{1}{\Gamma \left( a+1\right) }b^{a+1}x^{a}e^{-bx},\ x>0.
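This pattern-matching rule is one line of code (a sketch; the function name is mine):

```python
import math

def normalizing_constant(a, b):
    # The kernel x**a * exp(-b*x) on (0, inf) integrates to Gamma(a+1) / b**(a+1),
    # so multiplying by this constant turns it into the Gamma(b, a+1) density.
    return b ** (a + 1) / math.gamma(a + 1)
```

For example, for x^{2}e^{-3x} we get \alpha =3, \nu =3 and the constant \frac{3^{3}}{\Gamma \left( 3\right) }=\frac{27}{2}.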

Property 2

The most important property is that the family of gamma densities with the same \alpha is closed under convolutions. Because of the associativity property f_{X}\ast f_{Y}\ast f_{Z}=\left( f_{X}\ast f_{Y}\right) \ast f_{Z} it is enough to prove this for the case of two gamma densities.

First we want to prove

(1) \left( f_{\alpha ,\mu }\ast f_{\alpha ,\nu }\right) \left( x\right) =\frac{\Gamma \left( \mu +\nu \right) }{\Gamma \left( \mu \right) \Gamma\left( \nu \right) }\int_{0}^{1}\left( 1-t\right) ^{\mu -1}t^{\nu-1}dt\times f_{\alpha ,\mu +\nu }(x).

Start with the general definition of convolution and recall where the density vanishes:

\left( f_{\alpha ,\mu }\ast f_{\alpha ,\nu }\right) \left( x\right)=\int_{-\infty }^{\infty }f_{\alpha ,\mu }\left( x-y\right) f_{\alpha ,\nu}\left( y\right) dy=\int_{0}^{x}f_{\alpha ,\mu }\left( x-y\right) f_{\alpha,\nu }\left( y\right) dy

(plug the densities and take out the constants)

=\int_{0}^{x}\left[ \frac{1}{\Gamma \left( \mu \right) }\alpha ^{\mu}\left( x-y\right) ^{\mu -1}e^{-\alpha \left( x-y\right) }\right] \left[\frac{1}{\Gamma \left( \nu \right) }\alpha ^{\nu }y^{\nu -1}e^{-\alpha y}\right] dy
=\frac{\alpha ^{\mu +\nu }e^{-\alpha x}}{\Gamma \left( \mu \right) \Gamma\left( \nu \right) }\int_{0}^{x}\left( x-y\right) ^{\mu -1}y^{\nu -1}dy

(replace y=xt)

=\frac{\Gamma \left( \mu +\nu \right) }{\Gamma \left( \mu \right) \Gamma\left( \nu \right) }\frac{\alpha ^{\mu +\nu }x^{\mu +\nu -1}e^{-\alpha x}}{\Gamma \left( \mu +\nu \right) }\int_{0}^{1}\left( 1-t\right) ^{\mu-1}t^{\nu -1}dt
=\frac{\Gamma \left( \mu +\nu \right) }{\Gamma \left( \mu \right) \Gamma\left( \nu \right) }\int_{0}^{1}\left( 1-t\right) ^{\mu -1}t^{\nu-1}dt\times f_{\alpha ,\mu +\nu }\left( x\right).
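This chain of equalities can be checked numerically before trusting the algebra. The sketch below approximates both the convolution integral and the t-integral by the midpoint rule (grid sizes are my own choices, fine for \mu ,\nu \geq 1):

```python
import math

def gamma_pdf(x, alpha, nu):
    if x <= 0:
        return 0.0
    return alpha ** nu * x ** (nu - 1) * math.exp(-alpha * x) / math.gamma(nu)

def convolution_at(x, alpha, mu, nu, n=20000):
    # midpoint-rule approximation of int_0^x f_{alpha,mu}(x-y) f_{alpha,nu}(y) dy
    h = x / n
    return h * sum(gamma_pdf(x - (i + 0.5) * h, alpha, mu) *
                   gamma_pdf((i + 0.5) * h, alpha, nu) for i in range(n))

def beta_integral(mu, nu, n=20000):
    # midpoint-rule approximation of int_0^1 (1-t)**(mu-1) * t**(nu-1) dt
    h = 1.0 / n
    return h * sum((1 - (i + 0.5) * h) ** (mu - 1) * ((i + 0.5) * h) ** (nu - 1)
                   for i in range(n))

# Identity (1): convolution_at(x, alpha, mu, nu) should equal
# Gamma(mu+nu)/(Gamma(mu)*Gamma(nu)) * beta_integral(mu, nu) * gamma_pdf(x, alpha, mu+nu).
```

With, say, \alpha =1.5, \mu =2, \nu =3 and x=2 the two sides agree to within discretization error.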

Thus (1) is true. Integrating it over R we have

\int_{R}\left( f_{\alpha ,\mu }\ast f_{\alpha ,\nu }\right) \left( x\right)dx=\frac{\Gamma \left( \mu +\nu \right) }{\Gamma \left( \mu \right) \Gamma\left( \nu \right) }\int_{0}^{1}\left( 1-t\right) ^{\mu -1}t^{\nu-1}dt\times \int_{R}f_{\alpha ,\mu +\nu }\left( x\right) dx.

We know that the convolution of two densities is a density. Therefore the last equation implies

\frac{\Gamma \left( \mu +\nu \right) }{\Gamma \left( \mu \right) \Gamma\left( \nu \right) }\int_{0}^{1}\left( 1-t\right) ^{\mu -1}t^{\nu -1}dt=1


and therefore, by (1),

f_{\alpha ,\mu }\ast f_{\alpha ,\nu }=f_{\alpha ,\mu +\nu },\ \mu ,\nu >0.
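The closure property can also be seen by simulation: a sum of independent Gamma(\alpha ,\mu ) and Gamma(\alpha ,\nu ) draws should have the mean \frac{\mu +\nu }{\alpha } and variance \frac{\mu +\nu }{\alpha ^{2}} of Gamma(\alpha ,\mu +\nu ). This is only a moment check, not a proof, and the sample size is my arbitrary choice; note that Python's random.gammavariate takes shape and scale, so the scale is 1/\alpha:

```python
import random

random.seed(12345)  # reproducible sketch

alpha, mu, nu = 1.5, 2.0, 3.0
n = 200000

# X ~ Gamma(alpha, mu), Y ~ Gamma(alpha, nu) in Feller's rate/shape notation;
# random.gammavariate(shape, scale) => scale = 1/alpha
sums = [random.gammavariate(mu, 1.0 / alpha) + random.gammavariate(nu, 1.0 / alpha)
        for _ in range(n)]

sample_mean = sum(sums) / n
sample_var = sum((s - sample_mean) ** 2 for s in sums) / n

# Gamma(alpha, mu + nu) has mean (mu + nu)/alpha and variance (mu + nu)/alpha**2
```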

Alternative proof. The moment generating function of a sum of two independent gamma random variables with the same \alpha shows that this sum again has a gamma distribution with the same \alpha; see pp. 141, 209 in the guide ST2133.
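The MGF route is short: in this parameterization M\left( s\right) =\left( \frac{\alpha }{\alpha -s}\right) ^{\nu } for s<\alpha , so the MGF of the sum, being the product of the two MGFs, is \left( \frac{\alpha }{\alpha -s}\right) ^{\mu +\nu }. A sketch in code (the closed form is standard; the function name is mine):

```python
def gamma_mgf(s, alpha, nu):
    # closed-form MGF of Gamma(alpha, nu), valid for s < alpha
    return (alpha / (alpha - s)) ** nu

# independence => MGF of X + Y is gamma_mgf(s, alpha, mu) * gamma_mgf(s, alpha, nu),
# which equals gamma_mgf(s, alpha, mu + nu): the sum is Gamma(alpha, mu + nu)
```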