21 Jan 23

## Excel for mass education

### Problem statement

Covid with its lockdowns posed a difficult question: how do you teach online and prevent cheating by students? How do you do that efficiently, with a large number of students and without lowering teaching standards? I think the answer depends on what you teach. Using Excel made my course very attractive because many students adore learning Excel functions.

### Suggested solution

Last year I taught Financial Econometrics. The topic was Portfolio Optimization using the Sharpe ratio. The idea was to give the students Excel files with individual data sizes so that they have to do the calculations themselves. Those who tried to obtain a file from another student and submit it under their own name were easily identified. I punished both the giver and the receiver of the file. Some steps of assignment preparation and report checking may be very time-consuming if you don't automate them. In the following list, the starred steps are the ones that may take a lot of time with large groups of students.

Step 1. I download data for several stocks from Yahoo Finance and put them in one Excel file where I have the students’ list (Video 1).

Step 2. For each student I randomly choose the sample size for the chunk of data to be selected from the data I downloaded. The students are required to use the whole sample in their calculations.
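Step 2 can be sketched in a few lines of code; this is a hypothetical illustration (the names and the size range are made up, and the course itself keeps everything in Excel):

```python
import random

# Hypothetical illustration: the names and the size range are invented,
# not taken from the actual course files.
students = ["Aliya", "Bekzat", "Chen", "Dana"]
random.seed(0)  # fix the seed so the assignment is reproducible

# each student gets a random individual sample size
sizes = {name: random.randint(400, 500) for name in students}
```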

Step 3*. To create the individual student files with assignments, I use a Visual Basic macro. It reads a student's name and his or her sample size, creates an Excel file, pastes the appropriate sample there and saves the file under that student's name (Video 2).

Step 4*. In Gmail I prepare messages with individual Excel files. Gmail has an option for scheduling emails (Video 3). Outlook.com also has this feature but it requires too many clicks.

Step 5. The test is administered using MS Teams. At the beginning of the test, I give the necessary oral instructions and post the assignment description (which is common to all students). The emails are scheduled to be sent 10 minutes after the session start. The time allotted for the test is just enough to do the calculations in Excel. I cannot control how the students work, nor can I see whether they share screens to help each other. But the task is difficult enough that one needs to be familiar with the material to accomplish it, even when one can see on screen how somebody else is doing it.

Step 6*. Upon completion of the test, the students email me their files. The messages' arrival times are recorded by Gmail. I have to check the files and post the grades (Video 4).

### Skills to test

Portfolio Optimization involves the following steps.

a) For each stock one has to find daily rates of return.

b) Using arbitrary initial portfolio shares, the daily rates of return on the portfolio are calculated. I require the students to use matrix multiplication for this, which makes checking their work easier.

c) The daily rates of return on the portfolio are used to find the average return, standard deviation and Sharpe ratio for the portfolio. The fact that after all these calculations the students have to obtain a single number also simplifies verification.

d) Finally, the students have to optimize the portfolio shares using the Solver add-in.
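Steps a)–d) above can be sketched outside Excel as well. Here is a hedged Python/numpy version with simulated prices (the course itself uses Yahoo Finance data and Excel's Solver; scipy's `minimize` stands in for Solver, and the zero risk-free rate is my simplification):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# simulated daily prices for 4 stocks (the real course uses Yahoo Finance data)
prices = 100 * np.cumprod(1 + rng.normal(0.0005, 0.01, size=(250, 4)), axis=0)

# a) daily rates of return
returns = prices[1:] / prices[:-1] - 1

def neg_sharpe(w, r):
    port = r @ w                            # b) portfolio returns via matrix multiplication
    return -port.mean() / port.std(ddof=1)  # c) minus the Sharpe ratio (risk-free rate 0)

# d) optimize the shares, with Solver's "shares sum to one" constraint
w0 = np.full(4, 0.25)
res = minimize(neg_sharpe, w0, args=(returns,),
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}])
```

The optimizer returns the shares in `res.x` and the optimized Sharpe ratio as `-res.fun`, mirroring what the students read off the Solver output.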

The list above is just an example. The task can be expanded to check the knowledge of other elements of matrix algebra, Econometrics and/or Finance. In one of my assignments, I required my students to run a multiple regression. The Excel add-in called Data Analysis allows one to do that easily but my students were required to do everything using the matrix expression for the OLS estimator and also to report the results using Excel string functions.
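The matrix expression for the OLS estimator mentioned above, $\hat\beta=(X^TX)^{-1}X^Ty$, is easy to check outside Excel too. This Python sketch uses made-up data and mirrors what MMULT/MINVERSE/TRANSPOSE do on a worksheet:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
# made-up data: an intercept column and two regressors
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(scale=0.1, size=n)

# the matrix expression for the OLS estimator, (X'X)^{-1} X'y
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
```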

To make my job easier, I partially or completely automate time-consuming operations. Arguably, everything can be completely automated using Power Automate promoted by Microsoft. Except for the macro, my home-made solutions are simpler.

### Detailed explanations

How to make Gmail your mailto protocol handler

Video 1. Initial file

Video 2. Creating Excel individual files

Video 3. Scheduling emails

Video 4. How to quickly check students work

Macro for creating files

Sub CreateDataFiles()
'
' Needs a file with student names (column A), block sizes (column C)
' and data to choose data blocks from (columns E through M), all on
' sheet "block finec". For each selected block-size cell the macro
' creates a new workbook, pastes the student's individual data block
' and saves the workbook under the student's name.
' R1C5 is the upper left corner of the data; "R" & Size & "C13" is the
' lower right corner, where Size is read off column C.
' Edit the paths and range addresses as necessary.
'
' First select the cells with block sizes and then run the macro.
' Files will be created and saved with student names.
' Keyboard Shortcut: Ctrl+i
'
Dim cell As Range
Dim Size As Long
Dim StudentName As String

Application.ScreenUpdating = False
For Each cell In Selection.Cells

    Size = cell.Value
    StudentName = cell.Offset(0, -2).Value

    Range("E1:M" & Size).Copy       ' same range as R1C5:R<Size>C13
    Workbooks.Add                   ' the new file for this student
    ActiveSheet.Paste
    Application.CutCopyMode = False
    ActiveWorkbook.SaveAs Filename:= _
        "C:\Users\Student files\" & StudentName & ".xlsx", _
        FileFormat:=xlOpenXMLWorkbook, CreateBackup:=False
    ActiveWorkbook.Close

    Workbooks("Stat 2 Spring 2022 list with emails.xlsm").Activate

Next
Application.ScreenUpdating = True
End Sub

19 Feb 22

## Distribution of the estimator of the error variance

If you are reading the book by Dougherty: this post is about the distribution of the estimator  $s^2$ defined in Chapter 3.

Consider regression

(1) $y=X\beta +e$

where the deterministic matrix $X$ is of size $n\times k,$ satisfies $\det \left( X^{T}X\right) \neq 0$ (regressors are not collinear) and the error $e$ satisfies

(2) $Ee=0,Var(e)=\sigma ^{2}I$

$\beta$ is estimated by $\hat{\beta}=(X^{T}X)^{-1}X^{T}y.$ Denote $P=X(X^{T}X)^{-1}X^{T},$ $Q=I-P.$ Using (1) we see that $\hat{\beta}=\beta +(X^{T}X)^{-1}X^{T}e$ and the residual $r\equiv y-X\hat{\beta}=Qe.$ $\sigma^{2}$ is estimated by

(3) $s^{2}=\left\Vert r\right\Vert ^{2}/\left( n-k\right) =\left\Vert Qe\right\Vert ^{2}/\left( n-k\right) .$

$Q$ is a projector and has properties which are derived from those of $P$

(4) $Q^{T}=Q,$ $Q^{2}=Q.$

If $\lambda$ is an eigenvalue of $Q,$ then multiplying $Qx=\lambda x$ by $Q$ and using the fact that $x\neq 0$ we get $\lambda ^{2}=\lambda .$ Hence eigenvalues of $Q$ can be only $0$ or $1.$ The equation $tr\left( Q\right) =n-k$
tells us that the number of eigenvalues equal to 1 is $n-k$ and the remaining $k$ are zeros. Let $Q=U\Lambda U^{T}$ be the diagonal representation of $Q.$ Here $U$ is an orthogonal matrix,

(5) $U^{T}U=I,$

and $\Lambda$ is a diagonal matrix with eigenvalues of $Q$ on the main diagonal. We can assume that the first $n-k$ numbers on the diagonal of $\Lambda$ are ones and the others are zeros.

Theorem. Let $e$ be normal. 1) $s^{2}\left( n-k\right) /\sigma ^{2}$ is distributed as $\chi _{n-k}^{2}.$ 2) The estimators $\hat{\beta}$ and $s^{2}$ are independent.

Proof. 1) We have by (4)

(6) $\left\Vert Qe\right\Vert ^{2}=\left( Qe\right) ^{T}Qe=\left( Q^{T}Qe\right) ^{T}e=\left( Qe\right) ^{T}e=\left( U\Lambda U^{T}e\right) ^{T}e=\left( \Lambda U^{T}e\right) ^{T}U^{T}e.$

Denote $S=U^{T}e.$ From (2) and (5)

$ES=0,$ $Var\left( S\right) =EU^{T}ee^{T}U=\sigma ^{2}U^{T}U=\sigma ^{2}I$

and $S$ is normal as a linear transformation of a normal vector. It follows that $S=\sigma z$ where $z$ is a standard normal vector with independent standard normal coordinates $z_{1},...,z_{n}.$ Hence, (6) implies

(7) $\left\Vert Qe\right\Vert ^{2}=\sigma ^{2}\left( \Lambda z\right) ^{T}z=\sigma ^{2}\left( z_{1}^{2}+...+z_{n-k}^{2}\right) =\sigma ^{2}\chi _{n-k}^{2}.$

(3) and (7) prove the first statement.

2) First we note that the vectors $Pe,Qe$ are independent. Since they are normal, their independence follows from

$cov(Pe,Qe)=EPee^{T}Q^{T}=\sigma ^{2}PQ=0.$

It's easy to see that $X^{T}P=X^{T}.$ This allows us to show that $\hat{\beta}$ is a function of $Pe$:

$\hat{\beta}=\beta +(X^{T}X)^{-1}X^{T}e=\beta +(X^{T}X)^{-1}X^{T}Pe.$

Independence of $Pe,Qe$ leads to independence of their functions $\hat{\beta}$ and $s^{2}.$
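The spectral facts used in the proof (the eigenvalues of $Q$ are 0 or 1, with exactly $n-k$ ones, and $Q$ is idempotent) are easy to verify numerically. A small Python check with a random full-rank $X$ (the dimensions are my choice):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 20, 3
X = rng.normal(size=(n, k))  # random matrix, full rank with probability 1

P = X @ np.linalg.inv(X.T @ X) @ X.T
Q = np.eye(n) - P

# Q is symmetric, so its eigenvalues are real
eigvals = np.linalg.eigvalsh(Q)
```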

22 Dec 18

## Application: distribution of sigma squared estimator

For the formulation of multiple regression and classical conditions on its elements see Application: estimating sigma squared. There we proved unbiasedness of the OLS estimator of $\sigma^2.$ Here we do more: we characterize its distribution and obtain unbiasedness as a corollary.

### Preliminaries

We need a summary of what we know about the residual $r=y-\hat{y}$ and the projector $Q=I-P$ where $P=X(X^TX)^{-1}X^T:$

(1) $\Vert r\Vert^2=e^TQe.$

$P$ has $k$ unities and $n-k$ zeros on the diagonal of its diagonal representation, where $k$ is the number of regressors. With $Q$ it's the opposite: it has $n-k$ unities and $k$ zeros on the diagonal of its diagonal representation. We can always assume that the unities come first, so in the diagonal representation

(2) $Q=UDU^{-1}$

the matrix $U$ is orthogonal and $D$ can be written as

(3) $D=\left(\begin{array}{cc}I_{n-k}&0\\0&0\end{array}\right)$

where $I_{n-k}$ is an identity matrix and the zeros are zero matrices of compatible dimensions.

### Characterization of the distribution of $s^2$

Exercise 1. Suppose the error vector $e$ is normal: $e\sim N(0,\sigma^2I).$ Prove that the vector $\delta =U^{-1}e/\sigma$ is standard normal.

Proof. By the properties of orthogonal matrices

$Var(\delta)=E\delta\delta^T=U^{-1}Eee^TU/\sigma^2=U^{-1}U=I.$

This, together with the equation $E\delta =0$, proves that $\delta$ is standard normal.

Exercise 2. Prove that $\Vert r\Vert^2/\sigma^2$ is distributed as $\chi _{n-k}^2.$

Proof. From (1) and (2) we have

$\Vert r\Vert^2/\sigma^2=e^TUDU^{-1}e/\sigma^2=(U^{-1}e)^TD(U^{-1}e)/\sigma^2=\delta^TD\delta.$

Now (3) shows that $\Vert r\Vert^2/\sigma^2=\sum_{i=1}^{n-k}\delta_i^2$ which is the definition of $\chi _{n-k}^2.$

Exercise 3. Find the mean and variance of $s^2=\Vert r\Vert^2/(n-k)=\sigma^2\chi _{n-k}^2/(n-k).$

Solution. From Exercise 2 we obtain the result proved earlier in a different way:

$Es^2=\sigma^2E\chi _{n-k}^2/(n-k)=\sigma^2.$

Further, using $Var(\delta_i^2)=2$ for the square of a standard normal,

$Var(s^2)=\frac{\sigma^4}{(n-k)^2}\sum_{i=1}^{n-k}Var(\delta_i^2)=\frac{2\sigma^4}{n-k}.$
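Both moments found in Exercise 3 can be confirmed by simulation. A Monte Carlo sketch in Python (the dimensions, $\sigma$, number of replications and tolerances are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, sigma = 30, 3, 2.0
X = rng.normal(size=(n, k))
P = X @ np.linalg.inv(X.T @ X) @ X.T
Q = np.eye(n) - P

reps = 20000
e = rng.normal(scale=sigma, size=(reps, n))   # simulated normal error vectors
s2 = ((e @ Q) ** 2).sum(axis=1) / (n - k)     # s^2 = ||Qe||^2/(n-k) for each draw

mean_s2, var_s2 = s2.mean(), s2.var()         # should be near sigma^2 and 2*sigma^4/(n-k)
```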

18 Nov 18

## Application: Ordinary Least Squares estimator

### Generalized Pythagoras theorem

Exercise 1. Let $P$ be a projector and denote $Q=I-P.$ Then $\Vert x\Vert^2=\Vert Px\Vert^2+\Vert Qx\Vert^2.$

Proof. By the scalar product properties

$\Vert x\Vert^2=\Vert Px+Qx\Vert^2=\Vert Px\Vert^2+2(Px)\cdot (Qx)+\Vert Qx\Vert^2.$

$P$ is symmetric and idempotent, so

$(Px)\cdot (Qx)=(Px)\cdot[(I-P)x]=x\cdot[(P-P^2)x]=0.$

This proves the statement.

### Ordinary Least Squares (OLS) estimator derivation

Problem statement. A vector $y\in R^n$ (the dependent vector) and vectors $x^{(1)},...,x^{(k)}\in R^n$ (independent vectors or regressors) are given. The OLS estimator is defined as that vector $\beta \in R^k$ which minimizes the total sum of squares $TSS=\sum_{i=1}^n(y_i-x_i^{(1)}\beta_1-...-x_i^{(k)}\beta_k)^2.$

Denoting $X=(x^{(1)},...,x^{(k)}),$ we see that $TSS=\Vert y-X\beta\Vert^2$ and that finding the OLS estimator means approximating $y$ with vectors from the image $\text{Img}X.$ $x^{(1)},...,x^{(k)}$ should be linearly independent, otherwise the solution will not be unique.

Assumption. $x^{(1)},...,x^{(k)}$ are linearly independent. This, in particular, implies that $k\leq n.$

Exercise 2. Show that the OLS estimator is

(2) $\hat{\beta}=(X^TX)^{-1}X^Ty.$

Proof. Take $P=X(X^TX)^{-1}X^T;$ it is symmetric and idempotent, so Exercise 1 applies to it. Since $X\beta$ belongs to the image of $P,$ $P$ doesn't change it: $X\beta=PX\beta.$ Denoting also $Q=I-P$ we have

$\Vert y-X\beta\Vert^2=\Vert y-Py+Py-X\beta\Vert^2$

$=\Vert Qy+P(y-X\beta)\Vert^2$ (by Exercise 1)

$=\Vert Qy\Vert^2+\Vert P(y-X\beta)\Vert^2.$

This shows that $\Vert Qy\Vert^2$ is a lower bound for $\Vert y-X\beta\Vert^2.$ This lower bound is achieved when the second term is made zero. From

$P(y-X\beta)=Py-X\beta =X(X^TX)^{-1}X^Ty-X\beta=X[(X^TX)^{-1}X^Ty-\beta]$

we see that the second term is zero if $\beta$ satisfies (2).

Usually the above derivation is applied to the dependent vector of the form $y=X\beta+e$ where $e$ is a random vector with mean zero. But it holds without this assumption. See also simplified derivation of the OLS estimator.
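Both Exercise 1 and Exercise 2 can be verified numerically. A Python check with made-up data (note that no model $y=X\beta+e$ is assumed, matching the remark above):

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 50, 3
X = rng.normal(size=(n, k))
y = rng.normal(size=n)   # no model y = X*beta + e is assumed

P = X @ np.linalg.inv(X.T @ X) @ X.T
Q = np.eye(n) - P

beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y

# Exercise 1: ||y||^2 = ||Py||^2 + ||Qy||^2
pythagoras_gap = y @ y - ((P @ y) @ (P @ y) + (Q @ y) @ (Q @ y))

# Exercise 2: the minimized sum of squares equals the lower bound ||Qy||^2
rss = np.sum((y - X @ beta_hat) ** 2)
```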

6 Oct 17

## Significance level and power of test

In this post we discuss several interrelated concepts: null and alternative hypotheses, type I and type II errors and their probabilities. Review the definitions of a sample space and elementary events and that of a conditional probability.

## Type I and Type II errors

Regarding the true state of nature we assume two mutually exclusive possibilities: the null hypothesis (say, the suspect is innocent) and the alternative hypothesis (the suspect is guilty). It's up to us what to call the null and what to call the alternative. However, the statistical procedures are not symmetric: it's easier to measure the probability of rejecting the null when it is true than the other probabilities involved. This is why what is desirable to prove is usually designated as the alternative.

Usually in books you can see the following table.

| State of nature \ Decision taken | Fail to reject null | Reject null |
|---|---|---|
| Null is true | Correct decision | Type I error |
| Null is false | Type II error | Correct decision |

This table is not good enough because there is no link to probabilities. The next video does fill in the blanks.

Video. Significance level and power of test

## Significance level and power of test

The conclusion from the video is that

$\frac{P(T\bigcap R)}{P(T)}=P(R|T)=P(\text{Type I error})=\text{significance level},$

$\frac{P(F\bigcap R)}{P(F)}=P(R|F)=P(\text{correctly rejecting false null})=\text{power},$

where $T$ ($F$) denotes the event that the null is true (false) and $R$ the event that the null is rejected.
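These two conditional probabilities can be estimated by simulation. A sketch with a two-sided z test (the 5% level, the sample size and the alternative mean 0.8 are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 25, 20000
crit = 1.96  # two-sided z test at the 5% significance level

# under a true null (mean 0), the rejection frequency estimates P(R|T)
z_null = rng.normal(0.0, 1.0, size=(reps, n)).mean(axis=1) * np.sqrt(n)
type1 = np.mean(np.abs(z_null) > crit)

# under a false null (true mean 0.8), the rejection frequency estimates the power P(R|F)
z_alt = rng.normal(0.8, 1.0, size=(reps, n)).mean(axis=1) * np.sqrt(n)
power = np.mean(np.abs(z_alt) > crit)
```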
11 Aug 17

## Violations of classical assumptions

This will be a simple post explaining the common observation that "in Economics, variability of many variables is proportional to those variables". Make sure to review the assumptions; they tend to slip from memory. We consider the simple regression

(1) $y_i=a+bx_i+e_i.$

One of classical assumptions is

Homoscedasticity. All errors have the same variance: $Var(e_i)=\sigma^2$ for all $i$.

We discuss its opposite, which is

Heteroscedasticity. Not all errors have the same variance. It would be wrong to write it as $Var(e_i)\ne\sigma^2$ for all $i$ (which means that all errors have variance different from $\sigma^2$). You can write that not all $Var(e_i)$ are the same but it's better to use the verbal definition.

Remark about Video 1. The dashed lines can represent mean consumption. Then the fact that variation of a variable grows with its level becomes more obvious.

Video 1. Case for heteroscedasticity

Figure 1. Illustration from Dougherty: as x increases, variance of the error term increases

Homoscedasticity was used in the derivation of the OLS estimator variance; under heteroscedasticity that expression is no longer valid. There are other implications, which will be discussed later.

Companies example. The Samsung Galaxy Note 7 battery fires and explosions that caused two recalls cost the smartphone maker at least $5 billion. There is no way a small company could have such losses.

GDP example. The error in measuring US GDP is on the order of $200 bln, which is comparable to the GDP of Kazakhstan. However, the standard deviation of the ratio error/GDP seems to be about the same across countries, if the underground economy is not too big. Often the assumption that the standard deviation of the regression error is proportional to one of the regressors is plausible.

To see if the regression error is heteroscedastic, you can look at the graph of the residuals or use statistical tests.
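A quick way to see heteroscedasticity of the kind shown in Figure 1 is to simulate it. In this Python sketch the error standard deviation is proportional to the regressor (all numbers are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 10, size=n)
# the error standard deviation is proportional to the regressor x
e = x * rng.normal(0, 0.5, size=n)
y = 2 + 3 * x + e

b, a = np.polyfit(x, y, 1)    # slope and intercept of the fitted line
resid = y - (a + b * x)

# under this heteroscedasticity |residual| grows with x on average
corr = np.corrcoef(np.abs(resid), x)[0, 1]
```

A positive correlation between the absolute residuals and the regressor is the informal pattern one looks for on the residual graph.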

7 Aug 17

## Violations of classical assumptions

This is a large topic which requires several posts or several book chapters. During a conference in Sweden in 2010, a Swedish statistician asked me: "What is Econometrics, anyway? What tools does it use?" I said: "Among others, it uses linear regression." He said: "But linear regression is a general statistical tool, why do they say it's a part of Econometrics?" My answer was: "Yes, it's a general tool but the name Econometrics emphasizes that the motivation for its applications lies in Economics".

Both classical assumptions and their violations should be studied with this point in mind: What is the Economics and Math behind each assumption?

## Violations of the first three assumptions

We consider the simple regression

(1) $y_i=a+bx_i+e_i$

Make sure to review the assumptions. Their numbering and names sometimes are different from what Dougherty's book has. In particular, most of the time I omit the following assumption:

A6. The model is linear in parameters and correctly specified.

When it is not linear in parameters, you can think of nonlinear alternatives. Instead of saying "correctly specified" I say "true model" when a "wrong model" is available.

A1. What if the existence condition is violated? If variance of the regressor is zero, the OLS estimator does not exist. The fitted line is supposed to be vertical, and you can regress $x$ on $y$. Violation of the existence condition in case of multiple regression leads to multicollinearity, and that's where economic considerations are important.

A2. The convenience condition is called so because when it is violated, that is, the regressor is stochastic, there are ways to deal with this problem:  finite-sample theory and large-sample theory.

A3. What if the errors in (1) have means different from zero? This question can be divided in two: 1) the means of the errors are the same: $Ee_i=c\ne 0$ for all $i$ and 2) the means are different. Read the post about centering and see if you can come up with the answer for the first question. The means may be different because of omission of a relevant variable (can you do the math?). In the absence of data on such a variable, there is nothing you can do.
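One way to do the math for the omitted-variable question in A3 (my sketch, with a hypothetical omitted regressor $z$):

```latex
y_i = a + b x_i + c z_i + u_i, \qquad Eu_i = 0 \quad \text{(true model)},
y_i = a + b x_i + e_i, \qquad e_i = c z_i + u_i \quad \text{(model with } z \text{ omitted)},
Ee_i = c z_i, \quad \text{so the error means differ across } i \text{ whenever } z_i \text{ does.}
```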

Violations of A4 and A5 will be treated later.

26 Jul 17

## Nonlinear least squares

Here we explain the idea, illustrate the possible problems in Mathematica and, finally, show the implementation in Stata.

## Idea: minimize RSS, as in ordinary least squares

Observations come in pairs $(x_1,y_1),...,(x_n,y_n)$. In case of ordinary least squares, we approximated the y's with linear functions of the parameters, possibly nonlinear in x's. Now we use a function $f(a,b,x_i)$ which may be nonlinear in $a,b$. We still minimize RSS which takes the form $RSS=\sum r_i^2=\sum(y_i-f(a,b,x_i))^2$. Nonlinear least squares estimators are the values $a,b$ that minimize RSS. In general, it is difficult to find the formula (closed-form solution), so in practice software, such as Stata, is used for RSS minimization.
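For a concrete NLS run outside Stata, here is a hedged Python sketch using scipy's `least_squares` on a simulated exponential model (the data, true values 1.5 and 0.8, and starting values are all made up):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
x = np.linspace(0, 2, 50)
# simulated data from an exponential model (the true values are invented)
y = 1.5 * np.exp(0.8 * x) + rng.normal(0, 0.05, size=50)

def residuals(p, x, y):
    a, b = p
    return y - a * np.exp(b * x)   # r_i = y_i - f(a, b, x_i)

# minimize RSS = sum of squared residuals, starting from an initial guess
fit = least_squares(residuals, x0=[1.0, 1.0], args=(x, y))
a_hat, b_hat = fit.x
```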

## Simplified idea and problems in one-dimensional case

Suppose we want to minimize $f(x)$. Stata's default optimizer is a Newton-type method; a simplified sketch of such an iterative procedure consists of the following steps:

1. Select the initial value $x_0$.
2. Find the derivative (or tangent) of $f$ at $x_0$. Make a small step in the descent direction (indicated by the derivative), to obtain the next value $x_1$.
3. Repeat Step 2, using $x_1$ as the starting point, until the difference between the values of the objective function at two successive points becomes small. The last point $x_n$ will approximate the minimizing point.
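The three steps above can be sketched as plain gradient descent with a numerical derivative (a toy illustration, not Stata's actual Newton-type algorithm; the objective and tuning constants are made up):

```python
def f(x):
    # toy objective with a unique minimum at x = 2
    return (x - 2.0) ** 2 + 1.0

def minimize_1d(f, x0, step=0.1, tol=1e-10, max_iter=10000):
    x = x0
    h = 1e-6
    for _ in range(max_iter):
        deriv = (f(x + h) - f(x - h)) / (2 * h)  # step 2: derivative at the current point
        x_new = x - step * deriv                 # small step in the descent direction
        if abs(f(x_new) - f(x)) < tol:           # step 3: stop when the objective stalls
            return x_new
        x = x_new
    return x

x_min = minimize_1d(f, x0=5.0)  # step 1: choose the initial value
```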

Problems:

1. The minimizing point may not exist.
2. When it exists, it may not be unique. In general, there is no way to find out how many local minimums there are and which ones are global.
3. The minimizing point depends on the initial point.

See Video 1 for illustration in the one-dimensional case.

Video 1. NLS geometry

## Problems illustrated in Mathematica

Here we look at three examples of nonlinear functions, two of which are considered in Dougherty. The first is a power function (it can be linearized by applying logs) and the second is an exponential function (it cannot be linearized). The third function gives rise to two minimums. The possibilities are illustrated in Mathematica.

Video 2. NLS illustrated in Mathematica

## Finally, implementation in Stata

Here we show how to 1) generate a random vector, 2) create a vector of initial values, and 3) program a nonlinear dependence.

Video 3. NLS implemented in Stata

10 Jul 17

## Alternatives to simple regression in Stata

In the post on running simple regression in Stata we looked at the dependence of EARNINGS on S (years of schooling). At the end I suggested thinking about possible variations of the model. Specifically, could the dependence be nonlinear? We consider two answers to this question.

## Quadratic regression

This name is used for the quadratic dependence of the dependent variable on the independent variable. For our variables the dependence is

$EARNINGS=a+bS+cS^2+u$.

Note that the dependence on S is quadratic but the right-hand side is linear in the parameters, so we still are in the realm of linear regression. Video 1 shows how to run this regression.

Video 1. Running quadratic regression in Stata
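That the quadratic model is linear in the parameters can also be seen by fitting it with ordinary least squares on the design matrix $[1, S, S^2]$. A Python sketch with simulated data (the coefficients are invented, not estimates from Dougherty's file):

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.uniform(8, 20, size=200)
# invented coefficients, not estimates from Dougherty's file
EARNINGS = -10 + 1.5 * S + 0.08 * S**2 + rng.normal(0, 2, size=200)

# linear in (a, b, c): ordinary least squares on the design matrix [1, S, S^2]
X = np.column_stack([np.ones_like(S), S, S**2])
coef, *_ = np.linalg.lstsq(X, EARNINGS, rcond=None)
```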

## Nonparametric regression

The general way to write this model is

$y=m(x)+u.$

The beauty and power of nonparametric regression consists in the fact that we don't need to specify the functional form of dependence of $y$ on $x$. Therefore there are no parameters to interpret, there is only the fitted curve. There is also the estimated equation of the nonlinear dependence, which is too complex to consider here. I already illustrated the difference between parametric and nonparametric regression. See in Video 2 how to run nonparametric regression in Stata.

Video 2. Nonparametric dependence

6 Jul 17

## Running simple regression in Stata

Running simple regression in Stata is, well, simple. It's just a matter of a couple of clicks. Try to make it a small research.

1. Obtain descriptive statistics for your data (Statistics > Summaries, tables, and tests > Summary and descriptive statistics > Summary statistics). Look at all that stuff you studied in introductory statistics: units of measurement, means, minimums, maximums, and correlations. Knowing the units of measurement will be important for interpreting regression results; correlations will predict signs of coefficients, etc. In your report, don't just mechanically repeat all those measures; try to find and discuss something interesting.
2. Visualize your data (Graphics > Twoway graph). On the graph you can observe outliers and discern possible nonlinearity.
3. After running the regression, report the estimated equation. It is called a fitted line and in our case looks like this: Earnings = -13.93+2.45*S (use descriptive names and not abstract X, Y). To see if the coefficient of S is significant, look at its p-value, which is smaller than 0.001. This tells us that at all significance levels greater than or equal to 0.001 the null that the coefficient of S is zero is rejected. This follows from the definition of the p-value. Nobody cares about significance of the intercept. Report also the p-value of the F statistic. It characterizes significance of all nontrivial regressors and is important in case of multiple regression. The last statistic to report is R squared.
4. Think about possible variations of the model. Could the dependence of Earnings on S be nonlinear? What other determinants of Earnings would you suggest from among the variables in Dougherty's file?
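The quantities in item 3 (fitted line, p-value of the slope, R squared) can be reproduced outside Stata. A Python sketch on simulated data (the true coefficients are borrowed from the fitted line above only for illustration, not from Dougherty's file):

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)
# simulated data; the true line is borrowed from the fitted equation for illustration
S = rng.uniform(8, 20, size=100)
EARNINGS = -13.93 + 2.45 * S + rng.normal(0, 5, size=100)

res = linregress(S, EARNINGS)
# res.slope and res.intercept give the fitted line, res.pvalue is the p-value
# of the slope (null: the coefficient is zero), and res.rvalue ** 2 is R squared
```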

Figure 1. Looking at the data: we use a scatterplot

Figure 2. Running regression (Statistics > Linear models and related > Linear regression)