In just six months, the number of reads of my book went up by 2000.
For more information, see my site.
In my book I explained how one can use Excel to do statistical simulations and replace statistical tables commonly used in statistics courses. Here I go one step further by providing a free statistical calculator that replaces the following tables from the book by Newbold et al.:
Table 1 Cumulative Distribution Function, F(z), of the Standard Normal Distribution
Table 2 Probability Function of the Binomial Distribution
Table 5 Individual Poisson Probabilities
Table 7a Upper Critical Values of Chi-Square Distribution with Degrees of Freedom
Table 8 Upper Critical Values of Student’s t Distribution with Degrees of Freedom
Tables 9a, 9b Upper Critical Values of the F Distribution
The calculator is just a Google sheet with statistical functions; see Picture 1:
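For readers who prefer code to a spreadsheet, the same table values can be computed in a few lines. This is only a sketch of equivalents, assuming Python with scipy is available (the actual calculator is the Google sheet above):

```python
# Sketch: computing the values from the Newbold et al. tables with scipy.stats.
from scipy import stats

# Table 1: cumulative distribution function F(z) of the standard normal
print(stats.norm.cdf(1.96))              # about 0.975

# Table 2: binomial probability P(X = k) with n trials, success probability p
print(stats.binom.pmf(3, n=10, p=0.5))

# Table 5: individual Poisson probability P(X = k) with mean mu
print(stats.poisson.pmf(2, mu=3))

# Tables 7a, 8, 9a/9b: upper critical values are quantiles at 1 - alpha
print(stats.chi2.ppf(0.95, df=10))       # chi-square, 10 degrees of freedom
print(stats.t.ppf(0.95, df=20))          # Student's t, 20 degrees of freedom
print(stats.f.ppf(0.95, dfn=5, dfd=10))  # F distribution
```

The degrees of freedom and probability levels above are arbitrary examples; any values a statistics course needs can be plugged in the same way.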
Last semester I tried to explain theory through numerical examples. The results were terrible. Even the best students didn't live up to my expectations. The midterm grades were so low that I did something I had never done before: I allowed my students to write an analysis of the midterm at home. Those who were able to verbally articulate the answers to me received a bonus that allowed them to pass the semester.
This semester I made a U-turn. I announced that in the first half of the semester we would concentrate on theory, and we followed this methodology. Out of 35 students, 20 significantly improved their performance and 15 remained where they were.
a. Define the density of a random variable. Draw the density of heights of adults, making simplifying assumptions if necessary. Don't forget to label the axes.
b. According to your plot, how much is the integral? Explain.
c. Why can't the density be negative?
d. Why should the total area under the density curve be 1?
e. Where are basketball players on your graph? Write down the corresponding expression for probability.
f. Where are dwarfs on your graph? Write down the corresponding expression for probability.
This question is about the interval formula. In each case students have to write the equation for the probability and the corresponding integral of the density. At this level, I don't talk about the distribution function and introduce the density by the interval formula.
a. Define a discrete random variable and its mean.
b. Define linear operations with random variables.
c. Prove linearity of means.
d. Prove additivity and homogeneity of means.
e. How much is the mean of a constant?
f. Using induction, derive the linearity of means for the case of $n$ variables from the case of two variables (3 points).
a. Derive linearity of covariance in the first argument when the second is fixed.
b. How much is covariance if one of its arguments is a constant?
c. What is the link between variance and covariance? If you know one of these functions, can you find the other (there should be two answers)? (4 points)
a. Define the density of a standard normal.
b. Why is the function even? Illustrate this fact on the plot.
c. Why is the function odd? Illustrate this fact on the plot.
d. Justify the equation
e. Why is
f. Show on the same plot the areas corresponding to the probabilities, and write down the relationships between them.
a. Define a general normal variable
b. Use this definition to find the mean and variance of
c. Using part b, on the same plot graph the density of the standard normal and of a general normal with parameters
a. Define the density of a random variable. Draw the density of work experience of adults, making simplifying assumptions if necessary. Don't forget to label the axes.
b. According to your plot, how much is the integral? Explain.
c. Why can't the density be negative?
d. Why should the total area under the density curve be 1?
e. Where are retired people on your graph? Write down the corresponding expression for probability.
f. Where are young people (up to 25 years old) on your graph? Write down the corresponding expression for probability.
a. Define variance of a random variable. Why is it nonnegative?
b. Define the formula for variance of a linear combination of two variables.
c. How much is variance of a constant?
d. What is the formula for variance of a sum? What do we call homogeneity of variance?
e. What is larger: or ? (2 points)
f. One investor has 100 shares of Apple, another has 200 shares. Which investor's portfolio has larger variability? (2 points)
a. Write down the Taylor expansion and explain the idea. How are the Taylor coefficients found?
b. Use the Taylor series for the exponential function to define the Poisson distribution.
c. Find the mean of the Poisson distribution. What is the interpretation of the parameter in practice?
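The construction behind parts a-c can be checked numerically: the Taylor series $e^{\lambda}=\sum_k \lambda^k/k!$ guarantees that the Poisson probabilities $e^{-\lambda}\lambda^k/k!$ sum to one, and the mean works out to $\lambda$. A minimal sketch (the value of $\lambda$ is arbitrary):

```python
# Numerical check: Poisson probabilities built from the Taylor series of
# exp(lam) sum to 1, and the distribution's mean equals lam.
import math

lam = 2.5
# P(X = k) = exp(-lam) * lam**k / k!  -- each Taylor term divided by e**lam
probs = [math.exp(-lam) * lam**k / math.factorial(k) for k in range(100)]

total = sum(probs)                               # numerically 1
mean = sum(k * p for k, p in enumerate(probs))   # numerically lam

print(total, mean)
```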
a. Define the density of a standard normal.
b. Why is the function even? Illustrate this fact on the plot.
c. Why is the function odd? Illustrate this fact on the plot.
d. Justify the equation
e. Why is
f. Show on the same plot the areas corresponding to the probabilities, and write down the relationships between them.
a. Define a general normal variable
b. Use this definition to find the mean and variance of
c. Using part b, on the same plot graph the density of the standard normal and of a general normal with parameters
This year I am teaching AP Statistics. If things continue the way they are, about half of the class will fail. Here is my diagnosis and how I am handling the problem.
On the surface, the students lack algebra training, but I think the problem is deeper: many of them have underdeveloped cognitive abilities. Their perception is slow, their memory is limited, their analytical abilities are rudimentary, and they are not used to working at home. Limited resources require careful allocation.
Short and intuitive names are better than two-word professional names.
Instead of "sample space" or "probability space" say "universe". The universe is the widest possible event, and nothing exists outside it.
Instead of "elementary event" say "atom". Simplest possible events are called atoms. This corresponds to the theoretical notion of an atom in measure theory (an atom is a measurable set which has positive measure and contains no set of smaller positive measure).
Then the formulation of classical probability becomes short. Let $N$ denote the number of atoms in the universe and let $N_A$ be the number of atoms in event $A$. If all atoms are equally likely (have equal probabilities), then $P(A)=N_A/N$.
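The atom-counting formula can be sketched in code; the die example below is my own illustration, not from the text:

```python
# Sketch of the classical formula P(A) = N_A / N for a fair die roll.
universe = {1, 2, 3, 4, 5, 6}            # six equally likely atoms
A = {a for a in universe if a % 2 == 0}  # event "the outcome is even"

p = len(A) / len(universe)               # N_A / N
print(p)                                 # 0.5
```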
The clumsy "mutually exclusive events" is better replaced by the more visual "disjoint sets". Likewise, instead of "collectively exhaustive events" say "events that cover the universe".
The combination of "mutually exclusive" and "collectively exhaustive" events is beyond comprehension for many. I say: if events are disjoint and cover the universe, we call them tiles. To support this definition, play one of the jigsaw puzzles on screen (Video 1) and produce the picture from Figure 1.
Video 1. Tiles (disjoint events that cover the universe)
We are in the same boat. I mean the big boat. Not the class. Not the university. It's the whole country. We depend on each other. Failure of one may jeopardize the wellbeing of everybody else.
You work in teams. You help each other to learn. My lectures and your presentations are just the beginning of the journey of knowledge into your heads. I cannot control how it settles there. Be my teaching assistants, share your big and little discoveries with your classmates.
I don't just preach about you helping each other. I force you to work in teams. 30% of the final grade is allocated to team work. Team work means joint responsibility. You work on assignments together. I randomly select a team member for reporting. His or her grade is what each team member gets.
This kind of team work is incompatible with the Western obsession with grade privacy. If I say my grade is nobody's business, then by extension I consider the level of my knowledge a private issue. This will prevent me from asking for help and admitting my errors. The situation when students hide their errors and weaknesses from others also goes against the ethics of many workplaces. In my class all grades are public knowledge.
In some situations, keeping the grade private is technically impossible. Conducting a competition without announcing the points won is impossible. If I catch a student cheating, I announce the failing grade immediately, as a warning to others.
To those of you who think teambased learning is unfair to better students I repeat: 30% of the final grade is given for team work, not for personal achievements. The other 70% is where you can shine personally.
Team work serves several purposes.
Firstly, joint responsibility helps break communication barriers. See my students working in teams on classroom assignments in Video 2. The situation when a weaker student is too proud to ask for help and a stronger student doesn't want to offend by offering help is not acceptable. One can ask for help or offer help without losing each other's respect.
Video 2. Teams working on assignments
Secondly, it activates resources that otherwise sit idle. Explaining something to somebody is the best way to improve your own understanding. The better students master a kind of leadership that is especially valuable in modern society. For the weaker students, feeling responsible for a team improves motivation.
Thirdly, I save time by grading fewer student papers.
On exams and quizzes I mercilessly punish students for Yes/No answers without explanations. There are no half-points for half-understanding. This, in combination with the team work and the open-grades policy, allows me to achieve my main objective: students are eager to talk to me about their problems.
After studying the basics of set operations and probabilities, we had a midterm exam. It revealed that about one-third of the students didn't understand this material, and some of that misunderstanding came from high school. During the review session I wanted to see if they were ready for a frank discussion and told them: "Those who don't understand probabilities, please raise your hands." About one-third raised their hands. I invited two of them to work at the board.
Video 3. Translating verbal statements to sets, with accompanying probabilities
Many teachers think that Venn diagrams explain everything about sets because they are visual. No, for some students they are not visual enough. That's why I prepared a simple teaching aid (see Video 3) and explained the task to the two students as follows:
I am shooting at the target. The target is a square with two circles on it, one red and the other blue. The target is the universe (the bullet cannot hit points outside it). The probability of a set is its area. I am going to tell you one statement after another. You write that statement in the first column of the table. In the second column write the mathematical expression for the set. In the third column write the probability of that set, together with any accompanying formulas that you can come up with. The formulas should reflect the relationships between relevant areas.
Table 1. Set operations and probabilities
Statement  Set  Probability
1. The bullet hit the universe  $S$  $P(S)=1$
2. The bullet didn't hit the universe  $\emptyset$  $P(\emptyset)=0$
3. The bullet hit the red circle  $R$  $P(R)$ (the area of $R$)
4. The bullet didn't hit the red circle  $\bar{R}$  $P(\bar{R})=1-P(R)$
5. The bullet hit both the red and blue circles  $R\cap B$  $P(R\cap B)$ (in general, this is not equal to $P(R)P(B)$)
6. The bullet hit $R$ or $B$ (or both)  $R\cup B$  $P(R\cup B)=P(R)+P(B)-P(R\cap B)$ (additivity rule)
7. The bullet hit $R$ but not $B$  $R\setminus B$  $P(R\setminus B)=P(R)-P(R\cap B)$
8. The bullet hit $B$ but not $R$  $B\setminus R$  $P(B\setminus R)=P(B)-P(R\cap B)$
9. The bullet hit either $R$ or $B$ (but not both)  $(R\setminus B)\cup(B\setminus R)$  $P(R)+P(B)-2P(R\cap B)$
During the process, I illustrated everything on my teaching aid. This exercise allows the students to relate verbal statements to sets and further to their areas. The main point is that people need to see the logic, and that logic should be repeated several times through similar exercises.
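The areas logic can also be demonstrated by simulation: throw random points at the square universe and let frequencies play the role of areas. A sketch, assuming hypothetical circle positions of my choosing:

```python
# Monte Carlo sketch of the shooting-at-a-target exercise.
# The universe is the unit square; R and B are two (assumed) circles.
import random

random.seed(0)

def in_circle(x, y, cx, cy, r):
    """True if (x, y) lies inside the circle with center (cx, cy), radius r."""
    return (x - cx) ** 2 + (y - cy) ** 2 <= r ** 2

n = 100_000
hits_R = hits_B = hits_RB = hits_RuB = 0
for _ in range(n):
    x, y = random.random(), random.random()
    r_hit = in_circle(x, y, 0.4, 0.5, 0.25)   # red circle (assumed position)
    b_hit = in_circle(x, y, 0.6, 0.5, 0.25)   # blue circle (assumed position)
    hits_R += r_hit
    hits_B += b_hit
    hits_RB += r_hit and b_hit
    hits_RuB += r_hit or b_hit

pR, pB, pRB, pRuB = (h / n for h in (hits_R, hits_B, hits_RB, hits_RuB))
# The additivity rule from row 6 holds exactly for the counts:
print(pRuB, pR + pB - pRB)
```

For the counts the rule is exact inclusion-exclusion; for the true areas it holds in the limit as the number of shots grows.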
This actually has to do with Bayes' theorem. However, in simple problems one can use a dead simple approach: just find the probabilities of all elementary events. This post builds upon the post on Significance level and power of test, including the notation. Be sure to review that post.
Here is an example from the guide for Quantitative Finance by A. Patton (University of London course code FN3142).
Activity 7.2 Consider a test that has a Type I error rate of 5%, and power of 50%.
Suppose that, before running the test, the researcher thinks that both the null and the alternative are equally likely.
If the test indicates a failure to reject the null hypothesis, what is the probability that the null is true? If the test indicates a rejection of the null, what is the probability that the null is false?
Denote the events R = {Reject null}, A = {fAil to reject null}, T = {null is True}, F = {null is False}. Then we are given:
(1) $P(R\cap T)=P(R|T)P(T)=0.05\times 0.5=0.025,$
(2) $P(R\cap F)=P(R|F)P(F)=0.5\times 0.5=0.25.$
(1) and (2) show that we can find $P(R\cap T)$ and $P(R\cap F)$, and therefore, by additivity, also $P(A\cap T)=P(T)-P(R\cap T)=0.475$ and $P(A\cap F)=P(F)-P(R\cap F)=0.25.$ Once we know the probabilities of the elementary events, we can find everything about everything.
Answering the first question: just plug the probabilities in $P(T|A)=\dfrac{P(A\cap T)}{P(A\cap T)+P(A\cap F)}=\dfrac{0.475}{0.725}\approx 0.655.$
Answering the second question: just plug the probabilities in $P(F|R)=\dfrac{P(R\cap F)}{P(R\cap T)+P(R\cap F)}=\dfrac{0.25}{0.275}\approx 0.909.$
Patton uses Bayes' theorem and the law of total probability. The solution suggested above uses only additivity of probability.
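The elementary-events solution takes only a few lines of code. A sketch of the computation above:

```python
# Activity 7.2 through elementary events.
# Given: P(R|T) = 0.05 (Type I error rate), P(R|F) = 0.50 (power),
# and equal priors P(T) = P(F) = 0.5.
p_T, p_F = 0.5, 0.5
p_RT = 0.05 * p_T        # P(R and T) = P(R|T) P(T) = 0.025
p_RF = 0.50 * p_F        # P(R and F) = P(R|F) P(F) = 0.25
p_AT = p_T - p_RT        # P(A and T) = 0.475, by additivity
p_AF = p_F - p_RF        # P(A and F) = 0.25

# Probability that the null is true given a failure to reject:
p_T_given_A = p_AT / (p_AT + p_AF)
print(round(p_T_given_A, 3))   # 0.655
```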
In this post we discuss several interrelated concepts: null and alternative hypotheses, Type I and Type II errors, and their probabilities. Review the definitions of a sample space, elementary events, and conditional probability.
Regarding the true state of nature, we assume two mutually exclusive possibilities: the null hypothesis (say, the suspect is guilty) and the alternative hypothesis (the suspect is innocent). It's up to us what to call the null and what to call the alternative. However, statistical procedures are not symmetric: it's easier to measure the probability of rejecting the null when it is true than the other probabilities involved. This is why what is desirable to prove is usually designated as the alternative.
Usually in books you can see the following table.
State of nature  Decision: fail to reject null  Decision: reject null
Null is true  Correct decision  Type I error
Null is false  Type II error  Correct decision
This table is not good enough because there is no link to probabilities. The next video fills in the blanks.
The conclusion from the video is that the probability of a Type I error is the significance level of the test, while the probability of a Type II error is one minus the power.
In this post we looked at the dependence of EARNINGS on S (years of schooling). In the end I suggested thinking about possible variations of the model. Specifically, could the dependence be nonlinear? We consider two answers to this question.
This name is used for quadratic dependence of the dependent variable on the independent variable. For our variables the dependence is
$EARNINGS=\beta_1+\beta_2 S+\beta_3 S^2+u$.
Note that the dependence on S is quadratic but the righthand side is linear in the parameters, so we still are in the realm of linear regression. Video 1 shows how to run this regression.
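Because the right-hand side is linear in the betas, ordinary least squares applies directly. A sketch on synthetic data (the real example uses EARNINGS and S in Stata; all names and numbers below are made up for illustration):

```python
# Quadratic specification fitted by OLS on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
S = rng.uniform(8, 20, size=200)                 # years of schooling (fake)
earnings = 5 + 1.2 * S + 0.15 * S**2 + rng.normal(0, 2, size=200)

# Regressors are S and S^2, but the model is linear in the parameters,
# so ordinary least squares estimates (beta1, beta2, beta3).
X = np.column_stack([np.ones_like(S), S, S**2])
beta, *_ = np.linalg.lstsq(X, earnings, rcond=None)
print(beta)
```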
The general way to write this model is $y=f(x)+u$, with the function $f$ left unspecified. The beauty and power of nonparametric regression consist in the fact that we don't need to specify the functional form of the dependence of $y$ on $x$. Therefore there are no parameters to interpret; there is only the fitted curve. There is also an estimated equation of the nonlinear dependence, which is too complex to consider here. I have already illustrated the difference between parametric and nonparametric regression. See how to run nonparametric regression in Stata in Video 2.
Running simple regression in Stata is, well, simple; it's just a matter of a couple of clicks. Try to turn it into a small research exercise.
Introduction to Stata: Stata interface, how to use Stata Help, how to use Data Editor and how to graph data. Important details to remember:
See details in videos. Sorry about the background noise!
I am reading "5 Steps to a 5 AP Statistics, 2010-2011 Edition" by Duane Hinders (sorry, I don't have the latest edition). The tip at the bottom of p.200 says:
For the exam, be VERY, VERY clear on the discussion above. Many students
seem to think that we can attach a probability to our interpretation of a confidence
interval. We cannot.
This is one of those misconceptions that travel from book to book. Below I show how it may have arisen.
The intuition behind the confidence interval and the confidence interval derivation using the z score have been given here. To make the discussion close to Duane Hinders, I show the confidence interval derivation using the t statistic. Let $X_1,\dots,X_n$ be a sample of independent observations from a normal population, $\mu$ the population mean and $se=s/\sqrt{n}$ the standard error, where $s$ is the sample standard deviation. Skipping the intuition, let's go directly to the t statistic
(1) $t=\dfrac{\bar{X}-\mu}{se}$.
At the 95% confidence level, from statistical tables find the critical value $t_{cr}$ of the t statistic such that $P(-t_{cr}<t<t_{cr})=0.95.$
Plug (1) in here to get
(2) $P\left(-t_{cr}<\dfrac{\bar{X}-\mu}{se}<t_{cr}\right)=0.95.$
Using equivalent transformations of the inequalities (multiplying them by $se$ and adding $\mu$ to all sides) we rewrite (2) as
(3) $P(\mu-t_{cr}\,se<\bar{X}<\mu+t_{cr}\,se)=0.95.$
Thus, we have proved
Statement 1. The interval $(\mu-t_{cr}\,se,\ \mu+t_{cr}\,se)$ contains the values of the sample mean with probability 95%.
The left-side inequality in (3) is equivalent to $\mu<\bar{X}+t_{cr}\,se$, and the right-side one is equivalent to $\bar{X}-t_{cr}\,se<\mu$. Combining these two inequalities, we see that (3) can be equivalently written as
(4) $P(\bar{X}-t_{cr}\,se<\mu<\bar{X}+t_{cr}\,se)=0.95.$
So, we have
Statement 2. The interval $(\bar{X}-t_{cr}\,se,\ \bar{X}+t_{cr}\,se)$ contains the population mean with probability 95%.
In (3), the variable in the middle ($\bar{X}$) is random, and the statement that it belongs to some interval is naturally probabilistic. People who are not familiar with the above derivation don't understand how a statement that the population mean (which is a constant) belongs to some interval can be probabilistic. It's the interval endpoints that are random in (4) (the sample mean and standard error are both random); that's why there is a probability! Statements 1 and 2 are equivalent!
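The ex-ante probability in Statement 2 can be illustrated by simulation: draw many samples, build the t interval each time, and count how often it covers the population mean. A sketch, with the population parameters and sample size assumed by me:

```python
# Coverage sketch: the random interval (xbar ± t_cr * se) contains the
# fixed population mean about 95% of the time.
import math, random, statistics

random.seed(1)
mu, sigma, n = 10.0, 3.0, 25
t_cr = 2.064                      # two-sided 95% critical value, 24 df

covered = 0
trials = 2000
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.fmean(sample)
    se = statistics.stdev(sample) / math.sqrt(n)
    if xbar - t_cr * se < mu < xbar + t_cr * se:
        covered += 1

print(covered / trials)           # close to 0.95
```

Each individual interval either covers $\mu$ or it doesn't; only the long-run frequency is 95%, which is exactly the ex-ante/ex-post distinction discussed below.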
My colleague Aidan Islyami mentioned that we should distinguish estimates from estimators.
In all statistical derivations random variables are ex-ante (before the event). No book says that, but that's the way it is. An estimate is an ex-post (after the event) value of an estimator. An estimate is, of course, a number and not a random variable. Ex-ante, a confidence interval always has a probability. Ex-post, the fact that an estimate belongs to some interval is deterministic (it has probability either 0 or 1), and it doesn't make sense to talk about 95%.