Regarding the true state of nature we assume two mutually exclusive possibilities: the null hypothesis (say, the suspect is innocent) and the alternative hypothesis (the suspect is guilty). It is up to us what to call the null and what to call the alternative. However, the statistical procedures are not symmetric: it is easier to measure the probability of rejecting the null when it is true than the other probabilities involved. This is why what is desirable to prove is usually designated as the alternative.
Usually in books you can see the following table.
State of nature | Fail to reject null | Reject null
Null is true | Correct decision | Type I error
Null is false | Type II error | Correct decision
This table is not good enough because it contains no link to probabilities. The next video fills in the blanks.
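To connect the table to probabilities, here is a minimal Python sketch (my illustration, not from the post): under the null the test statistic of a two-sided z-test is standard normal, so every rejection is a Type I error, and rejecting when |z| > 1.96 should give a rejection rate close to the 5% significance level.

```python
import random

random.seed(0)

def type_one_error_rate(trials=100_000, critical=1.96):
    """Simulate a two-sided z-test under the null hypothesis.

    Each trial draws a test statistic from N(0, 1) (the null is true),
    so every rejection is a Type I error.
    """
    rejections = sum(1 for _ in range(trials)
                     if abs(random.gauss(0, 1)) > critical)
    return rejections / trials

rate = type_one_error_rate()
print(rate)  # close to 0.05 = P(|Z| > 1.96)
```

The significance level is exactly the probability of a Type I error, which is why the simulated rate hovers around 0.05.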
This will be a simple post explaining the common observation that "in Economics, variability of many variables is proportional to those variables". Make sure to review the assumptions; they tend to slip from memory. We consider the simple regression

$y_i = a + bx_i + e_i, \quad i = 1, \dots, n.$
One of classical assumptions is
Homoscedasticity. All errors have the same variance: $Var(e_i) = \sigma^2$ for all $i$.
We discuss its opposite, which is
Heteroscedasticity. Not all errors have the same variance. It would be wrong to write it as $Var(e_i) \neq \sigma^2$ for all $i$ (which would mean that every error has variance different from $\sigma^2$). You can write that not all $Var(e_i)$ are the same, but it is better to use the verbal definition.
Remark about Video 1. The dashed lines can represent mean consumption. Then the fact that variation of a variable grows with its level becomes more obvious.
Video 1. Case for heteroscedasticity
Figure 1. Illustration from Dougherty: as x increases, variance of the error term increases
Homoscedasticity was used in the derivation of the OLS estimator variance; under heteroscedasticity that expression is no longer valid. There are other implications, which will be discussed later.
Companies example. The Samsung Galaxy Note 7 battery fires and explosions that caused two recalls cost the smartphone maker at least $5 billion. There is no way a small company could have such losses.
GDP example. The error in measuring US GDP is on the order of $200 billion, which is comparable to the GDP of Kazakhstan. However, the standard deviation of the ratio error/GDP seems to be about the same across countries, as long as the underground economy is not too big. Often the assumption that the standard deviation of the regression error is proportional to one of the regressors is plausible.
To see if the regression error is heteroscedastic, you can look at the graph of the residuals or use statistical tests.
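The post does this in Stata; as a language-neutral illustration (mine, not the author's, with made-up numbers), here is a pure-Python sketch of the idea behind residual-based tests: simulate a regression whose error standard deviation is proportional to the regressor, fit OLS, then regress the squared residuals on the regressor. A clearly positive slope in that auxiliary regression signals heteroscedasticity (this is the idea behind the Breusch-Pagan test).

```python
import random

random.seed(1)

def ols_slope_intercept(x, y):
    """Simple-regression OLS: slope = Cov(x, y) / Var(x)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    return b, my - b * mx

# Simulate y = 1 + 2x + e with sd(e) proportional to x (heteroscedastic)
x = [random.uniform(1, 10) for _ in range(2000)]
y = [1 + 2 * xi + random.gauss(0, 0.5 * xi) for xi in x]

b, a = ols_slope_intercept(x, y)
residuals = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Auxiliary regression of squared residuals on x
aux_slope, _ = ols_slope_intercept(x, [e * e for e in residuals])
print(aux_slope > 0)  # squared residuals grow with x
```

Note that the OLS slope itself remains close to the true value 2: heteroscedasticity does not bias the estimator, it invalidates the usual variance formula.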
This is a large topic which requires several posts or several book chapters. During a conference in Sweden in 2010, a Swedish statistician asked me: "What is Econometrics, anyway? What tools does it use?" I said: "Among others, it uses linear regression." He said: "But linear regression is a general statistical tool, why do they say it's a part of Econometrics?" My answer was: "Yes, it's a general tool but the name Econometrics emphasizes that the motivation for its applications lies in Economics".
Both classical assumptions and their violations should be studied with this point in mind: What is the Economics and Math behind each assumption?
A6. The model is linear in parameters and correctly specified.
When it is not linear in parameters, you can think of nonlinear alternatives. Instead of saying "correctly specified" I say "true model" whenever a "wrong model" is also available.
A1. What if the existence condition is violated? If the variance of the regressor is zero, the OLS estimator does not exist. The fitted line would have to be vertical, and in that case you can regress $x$ on $y$ instead. Violation of the existence condition in the case of multiple regression leads to multicollinearity, and that's where economic considerations are important.
A2. The convenience condition is called so because, when it is violated (that is, when the regressor is stochastic), there are still ways to deal with the problem: finite-sample theory and large-sample theory.
A3. What if the errors in (1) have means different from zero? This question can be divided in two: 1) the means of the errors are the same, $Ee_i = c$ for all $i$, and 2) the means are different. Read the post about centering and see if you can come up with the answer to the first question. The means may be different because of omission of a relevant variable (can you do the math?). In the absence of data on such a variable, there is nothing you can do.
Here we explain the idea, illustrate the possible problems in Mathematica and, finally, show the implementation in Stata.
Idea: minimize RSS, as in ordinary least squares
Observations come in pairs $(x_i, y_i)$. In the case of ordinary least squares, we approximated the $y$'s with functions linear in the parameters, possibly nonlinear in the $x$'s. Now we use a function $f(x_i, \theta)$ which may be nonlinear in the parameter $\theta$. We still minimize RSS, which takes the form $RSS(\theta) = \sum_i (y_i - f(x_i, \theta))^2$. Nonlinear least squares estimators are the values that minimize RSS. In general, it is difficult to find the formula (closed-form solution), so in practice software, such as Stata, is used for RSS minimization.
Simplified idea and problems in one-dimensional case
Suppose we want to minimize a function $RSS(b)$ of a single parameter $b$. The Newton algorithm (the default in Stata) is an iterative procedure that consists of the following steps:
Step 1. Select the initial value $b_0$.
Step 2. Find the derivative (or tangent) of RSS at $b_0$. Make a small step in the descent direction (indicated by the derivative) to obtain the next value $b_1$.
Step 3. Repeat Step 2, using $b_1$ as the starting point, until the difference between the values of the objective function at two successive points becomes small. The last point approximates the minimizing point.
The minimizing point may not exist.
When it exists, it may not be unique. In general, there is no way to find out how many local minimums there are and which ones are global.
The minimizing point depends on the initial point.
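As a toy illustration of Steps 1 to 3 and of the dependence on the initial point (this example is mine, not from the post), take $RSS(b) = (b^2 - 1)^2$, which has two minimums, at $b = -1$ and $b = 1$. A derivative-based descent converges to different minimums from different starting values:

```python
def rss(b):
    return (b * b - 1) ** 2

def rss_derivative(b):
    return 4 * b * (b * b - 1)

def minimize(b0, step=0.05, tol=1e-12, max_iter=10_000):
    """Step 1: take the initial value b0; Step 2: move against the
    derivative; Step 3: repeat until the objective barely changes."""
    b = b0
    for _ in range(max_iter):
        b_next = b - step * rss_derivative(b)
        if abs(rss(b_next) - rss(b)) < tol:
            return b_next
        b = b_next
    return b

print(minimize(0.5))   # converges to the minimum near  1
print(minimize(-0.5))  # converges to the minimum near -1
```

Both points are local minimums of the same function, so which one the algorithm reports depends entirely on where it starts.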
See Video 1 for illustration in the one-dimensional case.
Video 1. NLS geometry
Problems illustrated in Mathematica
Here we look at three examples of nonlinear functions, two of which are considered in Dougherty. The first one is a power function (it can be linearized by applying logs) and the second is an exponential function (which cannot be linearized). The third function gives rise to two minimums. The possibilities are illustrated in Mathematica.
Video 2. NLS illustrated in Mathematica
Finally, implementation in Stata
Here we show how to 1) generate a random vector, 2) create a vector of initial values, and 3) program a nonlinear dependence.
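The post does this in Stata; as a language-neutral illustration of what the minimization amounts to (a sketch of mine, with made-up numbers), here is a brute-force RSS minimization for the exponential model $y = a e^{bx}$ over a parameter grid. Real software uses iterative methods instead of a grid, but the objective is the same.

```python
import math

# Hypothetical exact data generated from y = 2 * exp(0.3 * x)
xs = list(range(10))
ys = [2 * math.exp(0.3 * x) for x in xs]

def rss(a, b):
    """Residual sum of squares for the model y = a * exp(b * x)."""
    return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))

# Brute-force minimization of RSS over a grid of (a, b) values;
# Stata's nl command uses iterative descent instead
best = min(
    (rss(a / 100, b / 100), a / 100, b / 100)
    for a in range(100, 301)   # a on the grid [1.00, 3.00]
    for b in range(0, 101)     # b on the grid [0.00, 1.00]
)
print(best[1], best[2])  # close to the true values a = 2, b = 0.3
```

Because the data were generated without noise, the grid search recovers the generating parameters exactly; with noisy data the minimizer would only be close to them.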
In this post we looked at the dependence of EARNINGS on S (years of schooling). At the end I suggested thinking about possible variations of the model. Specifically, could the dependence be nonlinear? We consider two answers to this question.
This name is used for the quadratic dependence of the dependent variable on the independent variable. For our variables the dependence is

$EARNINGS = \beta_1 + \beta_2 S + \beta_3 S^2 + u.$
Note that the dependence on S is quadratic but the right-hand side is linear in the parameters, so we are still in the realm of linear regression. Video 1 shows how to run this regression.
Video 1. Running quadratic regression in Stata
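The point that the quadratic model is still linear regression can be made concrete: just add S squared as a second regressor and solve the usual normal equations. A minimal Python sketch (made-up data, mine, not Dougherty's sample) sets up $X'X\beta = X'y$ for the regressors 1, S, S² and solves the 3x3 system by Cramer's rule:

```python
# Hypothetical data generated exactly from y = 1 + 2*S + 0.5*S**2
S = [0, 1, 2, 3, 4, 5]
y = [1 + 2 * s + 0.5 * s ** 2 for s in S]

X = [[1.0, s, s ** 2] for s in S]  # design matrix: columns 1, S, S^2

# Normal equations: X'X beta = X'y
XtX = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(3)]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

d = det3(XtX)
beta = []
for k in range(3):  # Cramer's rule: replace column k of X'X with X'y
    mk = [[Xty[i] if j == k else XtX[i][j] for j in range(3)]
          for i in range(3)]
    beta.append(det3(mk) / d)

print(beta)  # recovers the coefficients 1, 2, 0.5
```

Nothing nonlinear happens in the estimation: the curvature comes entirely from feeding S² in as a regressor.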
The general way to write this model is

$y = m(x) + u,$

where the function $m$ is not specified in advance. The beauty and power of nonparametric regression consist in the fact that we don't need to specify the functional form of the dependence of $y$ on $x$. Therefore there are no parameters to interpret; there is only the fitted curve. There is also an estimated equation of the nonlinear dependence, but it is too complex to consider here. I have already illustrated the difference between parametric and nonparametric regression. See in Video 2 how to run nonparametric regression in Stata.
Running simple regression in Stata is, well, simple. It's just a matter of a couple of clicks. Still, try to turn it into a small research exercise.
Obtain descriptive statistics for your data (Statistics > Summaries, tables, and tests > Summary and descriptive statistics > Summary statistics). Look at all that stuff you studied in introductory statistics: units of measurement, means, minimums, maximums, and correlations. Knowing the units of measurement will be important for interpreting regression results; correlations will predict signs of coefficients, etc. In your report, don't just mechanically repeat all those measures; try to find and discuss something interesting.
Visualize your data (Graphics > Twoway graph). On the graph you can observe outliers and discern possible nonlinearity.
After running the regression, report the estimated equation. It is called a fitted line and in our case looks like this: Earnings = -13.93 + 2.45*S (use descriptive names and not abstract X, Y). To see if the coefficient of S is significant, look at its p-value, which is smaller than 0.001. This tells us that at all significance levels larger than or equal to 0.001 the null that the coefficient of S is zero is rejected. This follows from the definition of the p-value. Nobody cares about significance of the intercept. Report also the p-value of the F statistic. It characterizes the significance of all nontrivial regressors and is important in the case of multiple regression. The last statistic to report is R squared.
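For a self-contained look at where the reported numbers come from (a sketch with made-up data, not the Dougherty sample), here are the fitted line and R squared computed from the standard simple-regression formulas:

```python
# Hypothetical (x, y) data, just to show how the statistics are built
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0, 4.5, 5.0, 7.5, 9.0]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
syy = sum((yi - my) ** 2 for yi in y)

b = sxy / sxx      # slope of the fitted line
a = my - b * mx    # intercept
fitted = [a + b * xi for xi in x]
rss = sum((yi - fi) ** 2 for yi, fi in zip(y, fitted))
r_squared = 1 - rss / syy  # share of variation explained by the line

print(a, b, r_squared)
```

For simple regression, R squared equals the squared sample correlation between x and y, which is why strong correlations in the descriptive statistics predict a good fit.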
Think about possible variations of the model. Could the dependence of Earnings on S be nonlinear? What other determinants of Earnings would you suggest from among the variables in Dougherty's file?
Figure 1. Looking at the data (for this we use a scatterplot)
Figure 2. Running regression (Statistics > Linear models and related > Linear regression)
Autoregressive–moving-average (ARMA) models were suggested in 1951 by Peter Whittle in his PhD thesis. Do you think he played with data and then came up with his model? No, he was guided by theory. The same model may describe visually very different data sets, and visualization rarely leads to model formulation.
Recall that the main idea behind autoregressive processes is to regress the variable on its own past values. In the case of moving averages, we form linear combinations of elements of white noise. Combining the two ideas, we obtain the definition of the autoregressive-moving-average process:

$y_t = \mu + \beta_1 y_{t-1} + \dots + \beta_p y_{t-p} + u_t + \theta_1 u_{t-1} + \dots + \theta_q u_{t-q}. \quad (1)$

It is denoted ARMA(p,q), where p is the number of included past values and q is the number of included past errors (AKA shocks to the system). We should expect a couple of facts to hold for this process.
The coefficient $\beta_1$ can be called an instantaneous effect of $y_{t-1}$ on $y_t$. This effect accumulates over time (the value at $t$ is influenced by the value at $t-1$, which, in turn, is influenced by the value at $t-2$, and so on). Therefore the long-run interpretation of the coefficients is complicated. Comparison of Figures 1 and 2 illustrates this point.
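The accumulation of effects can be seen in an impulse response. In an AR(1) process $y_t = \beta y_{t-1} + u_t$, a one-unit shock today contributes $\beta$ after one period, $\beta^2$ after two, and so on, so the long-run cumulative effect is $1/(1-\beta)$ rather than $\beta$. A small sketch (mine, not from the post):

```python
def impulse_response(beta, horizon):
    """Effect of a one-unit shock at time 0 on y_0, y_1, ..., y_horizon
    for the AR(1) process y_t = beta * y_{t-1} + u_t."""
    response = [1.0]
    for _ in range(horizon):
        response.append(response[-1] * beta)  # each period multiplies by beta
    return response

beta = 0.5
resp = impulse_response(beta, 50)
print(resp[:4])   # 1, beta, beta^2, beta^3
print(sum(resp))  # approaches the long-run effect 1/(1 - beta) = 2
```

This is why the instantaneous coefficient alone understates the long-run impact: the shock keeps echoing through the lagged values.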
Exercise. For the model above find the mean (just modify this argument).
Figure 1. Simulated AR process
Figure 2. Simulated MA
Question. Why does the current error in (1) have the coefficient 1? Choose the answer you like:
1) We never used the current error with a nontrivial coefficient.
2) It is logical to assume that past shocks may have an aftereffect (measured by the thetas) on the current value different from 1, but the effect of the current shock should be 1.
3) Mathematically, the case when instead of $u_t$ we have $\theta_0 u_t$ with some nonzero $\theta_0$ can be reduced to the case when the current error has coefficient 1. Just introduce a new white noise $v_t = \theta_0 u_t$ and rewrite the model using it.
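Answer 3) can be checked numerically. For a process with the current error scaled by $\theta_0$, defining the new white noise $v_t = \theta_0 u_t$ turns it into a model whose current-error coefficient is 1. A quick sketch (my illustration, with arbitrary theta values):

```python
import random

random.seed(2)

theta0, theta1 = 2.0, 0.6
u = [random.gauss(0, 1) for _ in range(100)]

# Original form: y_t = theta0 * u_t + theta1 * u_{t-1}
y_old = [theta0 * u[t] + theta1 * u[t - 1] for t in range(1, 100)]

# Rewritten with the new white noise v_t = theta0 * u_t, so that the
# current shock enters with coefficient 1:
# y_t = v_t + (theta1 / theta0) * v_{t-1}
v = [theta0 * ut for ut in u]
y_new = [v[t] + (theta1 / theta0) * v[t - 1] for t in range(1, 100)]

matches = all(abs(a - b) < 1e-12 for a, b in zip(y_old, y_new))
print(matches)  # True: the two forms generate the same process
```

Rescaling white noise by a constant yields another white noise, so nothing is lost by normalizing the current-error coefficient to 1.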
Moving average processes: this time the intuition is mathematical and even geometric.
Review and generalize
Science is a vertical structure, as I say in my book, and we have long passed the point after which looking back is as important as looking forward. So here are a couple of questions for the reader to review the past material.
Q1. What is a stochastic process? (Answer: imagine a real line with a random variable attached to each integer point.)
Q2. There are good (stationary) processes and bad (all other) processes. How do you define the good ones? (Hint: Properties of means, covariances and variances are bread and butter of professionals.)
Q3. White noise is the simplest (after a constant) type of stationary process. Give the definition, and don't hope for a hint. Do you realize that the elements of white noise don't interact with one another, in the sense that the covariance between any two of them is zero?
Idea. Define a class of stationary processes by forming linear combinations of elements of white noise.
Q4. A simple realization of this idea is given here. How do you generalize it?
Answer. The process

$y_t = u_t + \theta_1 u_{t-1} + \dots + \theta_q u_{t-q}, \quad (1)$

where $u_t$ is white noise, is called a moving average process of order $q$ and is denoted MA(q).
Remarks. 1) The "moving average" name may be misleading. In Finance we use that name when the coefficients sum to one and are positive. Here the thetas do not necessarily sum to one and may change sign.
2) It would be better to say a "moving linear combination". The coefficients of the linear combination do not change but are applied to a moving segment of the white noise, starting from the element dated $t$ and going back to the element dated $t-q$. In this sense we say that (1) involves the segment $[t-q, t]$.
3) In Economics and Finance, the errors are treated as shocks. (1) tells us that the process is a result of the current shock and previous shocks.
Moving average properties
First stationarity condition. $Ey_t = 0$ for all $t$; this should be absolutely obvious by now.
Second stationarity condition. Variance does not depend on time:

$Var(y_t) = E(u_t + \theta_1 u_{t-1} + \dots + \theta_q u_{t-q})^2 = (1 + \theta_1^2 + \dots + \theta_q^2)\sigma^2,$

because only the products $u_{t-j} u_{t-j}$ have nonzero expectations.
Third stationarity condition. Here is where geometry is useful. If one linear combination involves the segment $[t-q, t]$ and the other the segment $[s-q, s]$, then under what condition do these segments not overlap? Answer: if the distance between the points $t$ and $s$ is larger than $q$. In this case the linear combinations have no common elements and $Cov(y_t, y_s)$ is zero.
Exercise. (I leave you the tedious part.) Calculate $Cov(y_t, y_s)$ for $|t - s| \le q$.
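One way to check your answer to the exercise: the overlap picture says that only the coefficients multiplying common elements of white noise contribute to the covariance. Writing the coefficient of the current error as 1, a sketch of mine computes these autocovariances for an MA(2) with hypothetical coefficients:

```python
def ma_autocovariance(thetas, k, sigma2=1.0):
    """Autocovariance gamma(k) of the MA(q) process
    y_t = u_t + thetas[0]*u_{t-1} + ... + thetas[q-1]*u_{t-q},
    where u_t is white noise with variance sigma2.

    Only overlapping elements of white noise contribute, so with
    c = (1, theta_1, ..., theta_q) we sum the products c[j] * c[j+k].
    """
    c = [1.0] + list(thetas)
    if k >= len(c):
        return 0.0  # the segments [t-q, t] and [s-q, s] do not overlap
    return sigma2 * sum(c[j] * c[j + k] for j in range(len(c) - k))

thetas = [0.5, 0.25]  # hypothetical MA(2) coefficients
print([ma_autocovariance(thetas, k) for k in range(4)])
# [1.3125, 0.625, 0.25, 0.0]
```

At k = 0 the formula reproduces the variance from the second stationarity condition, and beyond lag q the covariance vanishes, as the non-overlap argument predicts.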
Figure 1. Electricity load in France and Great Britain for 2001 to 2006
If you have only one variable, what can you regress it on? Only on its own past values (future values are not available at any given moment). Figure 1 on electricity demand from a paper by J.W. Taylor illustrates this. A low value of electricity demand, say, in summer last year, will drive down its value in summer this year. Overall, we would expect the electricity demand now to depend on its values in the past 12 months. Another important observation from this example is that probably this time series is stationary.
We want a definition of a class of stationary models. From this example we see that excluding the time trend increases the chances of obtaining a stationary process. The idea to regress the process on its own past values is realized in

$y_t = \mu + \beta_1 y_{t-1} + \dots + \beta_p y_{t-p} + u_t. \quad (1)$

Here $p$ is some positive integer. However, both this example and the one about the random walk show that some condition on the coefficients will be required for (1) to be stationary. (1) is called an autoregressive process of order $p$ and is denoted AR(p).
Exercise 1. Repeat the calculations for the AR(1) process to see that in the case $p = 1$ the stability condition $|\beta_1| < 1$ is sufficient for stationarity of (1) (that is, the intercept $\mu$ has no impact on stationarity).
Question. How does this stability condition generalize to AR(p)?
Denote by $L$ the lag operator, defined by $Ly_t = y_{t-1}$. More generally, its powers are defined by $L^k y_t = y_{t-k}$. Then (1) can be rewritten as

$y_t = \mu + \beta_1 Ly_t + \dots + \beta_p L^p y_t + u_t.$

Whoever first did this wanted to solve the equation for $y_t$. Sending all terms containing $y_t$ to the left we have

$y_t - \beta_1 Ly_t - \dots - \beta_p L^p y_t = \mu + u_t.$

The identity operator is defined by $Iy_t = y_t$, so $y_t = Iy_t$. Factoring out $y_t$ we get

$(I - \beta_1 L - \dots - \beta_p L^p) y_t = \mu + u_t. \quad (2)$

Finally, formally solving for $y_t$ we have

$y_t = (I - \beta_1 L - \dots - \beta_p L^p)^{-1} (\mu + u_t).$
Definition 1. In (2) replace the identity $I$ by 1 and the powers of the lag operator by powers of a real number $z$ to obtain the definition of the characteristic polynomial:

$p(z) = 1 - \beta_1 z - \dots - \beta_p z^p. \quad (3)$
Definition 2. We say that model (1) is stable if its characteristic polynomial (3) has all roots outside the unit circle, that is, all roots are larger than 1 in absolute value.
Under this stability condition the passage from (2) to the formal solution can be justified. For the AR(1) process this has actually been done.
Example 1. In the case of a first-order process, $p(z) = 1 - \beta_1 z$ has the single root $z = 1/\beta_1$, which lies outside the unit circle exactly when $|\beta_1| < 1$.
Example 2. In the case of a second-order process, $p(z) = 1 - \beta_1 z - \beta_2 z^2$ has two roots. If both of them are larger than 1 in absolute value, then the process is stable. The formula for the roots of a quadratic equation is well known, but stating it here wouldn't add much to what we know. Most statistical packages, including Stata, have procedures for checking stability.
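Example 2 can be coded directly. For hypothetical AR(2) coefficients (my illustration, not Stata's actual procedure), find the roots of $1 - \beta_1 z - \beta_2 z^2$ with the quadratic formula, using complex arithmetic since the roots may be complex, and check whether both lie outside the unit circle:

```python
import cmath

def ar2_is_stable(beta1, beta2):
    """Check stability of y_t = mu + beta1*y_{t-1} + beta2*y_{t-2} + u_t
    via the roots of the characteristic polynomial 1 - beta1*z - beta2*z**2.
    Assumes beta2 != 0, so the polynomial is genuinely quadratic."""
    # Quadratic formula for -beta2*z**2 - beta1*z + 1 = 0
    a, b, c = -beta2, -beta1, 1.0
    disc = cmath.sqrt(b * b - 4 * a * c)  # complex sqrt handles disc < 0
    roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
    return all(abs(z) > 1 for z in roots)

print(ar2_is_stable(0.5, 0.3))   # stable: both roots outside the circle
print(ar2_is_stable(1.2, 0.3))   # not stable: a root is inside the circle
```

For complex roots, abs(z) is the modulus, so the same "outside the unit circle" check covers both real and complex cases.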
Remark. Hamilton uses a different definition of the characteristic polynomial (linked to vector autoregressions), that's why in his definition the roots of the characteristic equation should lie inside the unit circle.