Suppose in a box we have coins and banknotes of only two denominations: $1 and $5 (see Figure 1).
Figure 1. Illustration of two variables
We pull one item out at random. The division of cash by type (coin or banknote) divides the sample space (shown as a square, lower left picture) with probabilities $p_c$ and $p_b$ (they sum to one). The division by denomination ($1 or $5) divides the same sample space differently, see the lower right picture, with the probabilities to pull out $1 and $5 equal to $q_1$ and $q_5$, resp. (they also sum to one). This is summarized in the tables
Variable 1: Cash type

coin       $p_c$
banknote   $p_b$

Variable 2: Denomination

$1   $q_1$
$5   $q_5$
Now we can consider joint events and probabilities (see Figure 2, where the two divisions are combined).
Figure 2. Joint probabilities
For example, if we pull out a random item, it can be a coin of denomination $1, and the corresponding probability is denoted $p_{c,1}$. The two divisions of the sample space generate a new division into four parts. Then geometrically it is obvious that we have four identities:

Adding over denominations: $p_c = p_{c,1} + p_{c,5}$, $p_b = p_{b,1} + p_{b,5}$.

Adding over cash types: $q_1 = p_{c,1} + p_{b,1}$, $q_5 = p_{c,5} + p_{b,5}$.

Formally, here we use additivity of probability for disjoint events: if $A \cap B = \varnothing$, then $P(A \cup B) = P(A) + P(B)$.

In words: we can recover the own probabilities of variables 1 and 2 from the joint probabilities.
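These identities are easy to check mechanically. Here is a small Python sketch with hypothetical joint probabilities (the numbers 0.3, 0.2, 0.1, 0.4 are made up for illustration): summing the joint table over one variable recovers the marginal probabilities of the other.

```python
# Hypothetical joint probabilities p_{type, denomination}; they sum to 1.
joint = {
    ("coin", "$1"): 0.3, ("coin", "$5"): 0.2,
    ("banknote", "$1"): 0.1, ("banknote", "$5"): 0.4,
}

# Marginal of cash type: add over denominations.
p_cash = {}
for (cash, denom), p in joint.items():
    p_cash[cash] = p_cash.get(cash, 0.0) + p

# Marginal of denomination: add over cash types.
p_denom = {}
for (cash, denom), p in joint.items():
    p_denom[denom] = p_denom.get(denom, 0.0) + p

print(p_cash)   # coin: 0.3 + 0.2, banknote: 0.1 + 0.4
print(p_denom)  # $1: 0.3 + 0.1, $5: 0.2 + 0.4
```

Each marginal distribution sums to one, as it must.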
Generalization
Suppose we have two discrete random variables $X$ and $Y$ taking values $x_1, \dots, x_n$ and $y_1, \dots, y_m$, resp., and their own probabilities are $p_i = P(X = x_i)$, $q_j = P(Y = y_j)$. Denote the joint probabilities $p_{ij} = P(X = x_i, Y = y_j)$. Then we have the identities

(1) $p_i = \sum_{j=1}^m p_{ij}$, $q_j = \sum_{i=1}^n p_{ij}$ ($n + m$ equations).

In words: to obtain the marginal probability of one variable (say, $X$) sum over the values of the other variable (in this case, $Y$).

The name marginal probabilities is used for $p_i, q_j$ because in the two-dimensional table they arise as a result of summing table entries along columns or rows and are displayed in the margins.
Analogs for continuous variables with densities
Suppose we have two continuous random variables $X$ and $Y$, and their own densities are $f_X$ and $f_Y$. Denote the joint density $f_{X,Y}$. Then replacing in (1) sums by integrals and probabilities by densities we get

(2) $f_X(x) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dy$, $f_Y(y) = \int_{-\infty}^{\infty} f_{X,Y}(x, y)\,dx$.

In words: to obtain one marginal density (say, $f_X$) integrate out the other variable (in this case, $y$).
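As a numerical sanity check of this recipe, here is a sketch with a made-up joint density $f_{X,Y}(x,y) = x + y$ on the unit square (zero elsewhere); integrating out $y$ with a midpoint Riemann sum should reproduce the exact marginal $f_X(x) = x + 1/2$.

```python
def f_joint(x, y):
    # Toy joint density on [0,1]^2; it integrates to 1 over the square.
    return x + y

def f_X(x, n=10_000):
    # Integrate out y over [0,1] with a midpoint Riemann sum.
    h = 1.0 / n
    return sum(f_joint(x, (j + 0.5) * h) for j in range(n)) * h

print(f_X(0.3))  # close to 0.3 + 0.5 = 0.8
```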
Its content, organization and level justify its adoption as a textbook for introductory statistics for Econometrics in most American or European universities. The book's table of contents is somewhat standard; the innovation comes in a presentation that is crisp, concise, precise and directly relevant to the Econometrics course that will follow. I think instructors and students will appreciate the absence of unnecessary verbiage that permeates many existing textbooks.
Having read Professor Mynbaev's previous books and research articles, I was not surprised by his clear writing and precision. However, I was surprised by the informal and almost conversational one-on-one style of writing, which should please most students. The informality belies a careful presentation where great care has been taken to present the material in a pedagogical manner.
Carlos Martins-Filho, Professor of Economics, University of Colorado at Boulder, Boulder, USA
Last semester I tried to explain theory through numerical examples. The results were terrible. Even the best students didn't live up to my expectations. The midterm grades were so low that I did something I had never done before: I allowed my students to write an analysis of the midterm at home. Those who were able to verbally articulate the answers to me received a bonus that allowed them to pass the semester.
This semester I made a U-turn. I announced that in the first half of the semester we would concentrate on theory, and we followed this methodology. Out of 35 students, 20 significantly improved their performance and 15 remained where they were.
a. Define the density of a random variable. Draw the density of heights of adults, making simplifying assumptions if necessary. Don't forget to label the axes.
b. According to your plot, how much is the integral $\int_{-\infty}^{\infty} f(x)\,dx$? Explain.
c. Why can't the density be negative?
d. Why should the total area under the density curve be 1?
e. Where are basketball players on your graph? Write down the corresponding expression for probability.
f. Where are dwarfs on your graph? Write down the corresponding expression for probability.
This question is about the interval formula. In each case students have to write the equation for the probability and the corresponding integral of the density. At this level, I don't talk about the distribution function; I introduce the density by the interval formula.
a. Derive the linearity of covariance in the first argument when the second argument is fixed.
b. How much is the covariance if one of its arguments is a constant?
c. What is the link between variance and covariance? If you know one of these functions, can you find the other (there should be two answers)? (4 points)
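These properties are easy to verify numerically with sample covariances. The sketch below uses simulated data (the variables and coefficients are arbitrary choices for illustration): sample covariance is linear in its first argument, vanishes when one argument is constant, and $Var(X) = Cov(X, X)$, while $Cov(X, Y) = (Var(X+Y) - Var(X) - Var(Y))/2$ recovers covariance from variances.

```python
import random

random.seed(0)
n = 100_000
X = [random.gauss(0, 1) for _ in range(n)]
Y = [random.gauss(0, 1) for _ in range(n)]
Z = [x + random.gauss(0, 1) for x in X]

def mean(v):
    return sum(v) / len(v)

def cov(u, v):
    # Sample covariance with divisor n
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

# a. Linearity in the first argument: Cov(aX + bY, Z) = a Cov(X,Z) + b Cov(Y,Z)
a, b = 2.0, -3.0
lhs = cov([a * x + b * y for x, y in zip(X, Y)], Z)
rhs = a * cov(X, Z) + b * cov(Y, Z)

# b. Covariance with a constant argument is exactly zero
c_cov = cov([5.0] * n, Z)

# c. Variance from covariance, and covariance from variances
var_X = cov(X, X)
XY = [x + y for x, y in zip(X, Y)]
cov_from_var = (cov(XY, XY) - var_X - cov(Y, Y)) / 2
```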
a. Define the density of a random variable. Draw the density of work experience of adults, making simplifying assumptions if necessary. Don't forget to label the axes.
b. According to your plot, how much is the integral $\int_{-\infty}^{\infty} f(x)\,dx$? Explain.
c. Why can't the density be negative?
d. Why should the total area under the density curve be 1?
e. Where are retired people on your graph? Write down the corresponding expression for probability.
f. Where are young people (up to 25 years old) on your graph? Write down the corresponding expression for probability.
Recently I enjoyed reading Jack Weatherford's "Genghis Khan and the Making of the Modern World" (2004). I was reading the book with a specific question in mind: what were the main reasons for the success of the Mongols? Here you can see the list of their innovations, some of which were in fact adapted from the nations they subjugated. But what was the main driving force behind those innovations? The conclusion I came to is that Genghis Khan was a brilliant psychologist. He used what he knew about individual and social psychology to constantly improve the government of his empire.
I am no Genghis Khan but I try to base my teaching methods on my knowledge of student psychology.
Problems and suggested solutions
Steven Krantz, in his book How to Teach Mathematics (2nd edition, 1998; I don't remember the page), says something like this: if you want your students to do something, arrange your classes so that they do it in class.
Problem 1. Students mechanically write down what the teacher says and writes.
Solution. I don't allow my students to write while I am explaining the material. When I explain, their task is to listen and try to understand. I invite them to ask questions and prompt me to write more explanations and comments. After they all say "We understand", I clean the board and then they write down whatever they understood and remembered.
Problem 2. Students are not used to analyzing what they read or write.
Solution. After students finish their writing, I ask them to exchange notebooks and check each other's writings. It's easier for them to do this while everything is fresh in their memory. I bought and distributed red pens. When they see that something is missing or wrong, they have to write in red. Errors or omissions must stand out. Thus, right there in the class students repeat the material twice.
Problem 3. Students don't study at home.
Solution. I let my students know in advance what the next quiz will be about. Even with this knowledge, most of them don't prepare at home. Before the quiz I give them about half an hour to repeat and discuss the material (this is at least the third repetition). We start the quiz when they say they are ready.
Problem 4. Students don't understand that active repetition (writing without looking at one's notes) is much more productive than passive repetition (just reading the notes).
Solution. Each time before discussion sessions I distribute scratch paper and urge students to write, not just read or talk. About half of them follow my recommendation. Their desire to keep their notebooks neat plays no small role here. The solution to Problem 1 also hinges upon active repetition.
Problem 5. If students work and are evaluated individually, usually there is no or little interaction between them.
Problem 6. Some students don't want to work in teams. They are usually either good students, who don't want to suffer because of weak team members, or weak students, who don't want their low grades to harm other team members.
Solution. The good students usually argue that it's not fair if their grade becomes lower because of somebody else's fault. My answer to them is that the meaning of fairness depends on the definition. In my grading scheme, 30 points out of 100 are allocated to team work and the rest to individual achievements. Therefore I never allow good students to work individually. I want them to be my teaching assistants and help other students, and I tell them that I may reward good students with a bonus at the end of the semester. In some cases I allow weak students to write quizzes individually, but only if the team so requests; the request of the weak student alone doesn't matter. The weak student still has to participate in team discussions.
Problem 7. There is no accumulation of theoretical knowledge (flat learning curve).
Solution. a) Most students come from high school with little experience in algebra. I raise the level gradually and emphasize understanding. Students never see multiple choice questions in my classes. They also know that right answers without explanations will be discarded.
b) Normally, during my explanations I fill the board. The amount of information the students have to remember is substantial and increases over time. If you know a better way to develop one's internal vision, let me know.
c) I don't believe in learning the theory by doing applied exercises. After explaining the theory I formulate it as a series of theoretical exercises. I give the theory in large, logically consistent blocks for students to see the system. Half of the exam questions are theoretical (students have to provide proofs and derivations) and the other half are applied.
d) The right motivation can be of two types: theoretical or applied, and I never substitute one for another.
Problem 8. In low-level courses you need to conduct frequent evaluations to keep your students in working shape. Multiply that by the number of students, and you get a serious teaching overload.
Solution. Once, at a teaching conference in Prague, my colleague from New York boasted that he grades 160 papers per week. Evaluating one paper per team saves you from that hell.
Outcome
At the beginning of the academic year I had 47 students. In the second semester, 12 students dropped the course entirely or enrolled in Stats classes taught by other teachers. Based on current grades, I expect 15 more students to fail. Thus, after the first year I'll have about 20 students in my course (if they don't fail other courses). These students will master statistics at the level of my book.
Reevaluating probabilities based on a piece of evidence
This actually has to do with Bayes' theorem. However, in simple problems one can use a dead simple approach: just find the probabilities of all elementary events. This post builds upon the post on Significance level and power of test, including the notation. Be sure to review that post.
Here is an example from the guide for Quantitative Finance by A. Patton (University of London course code FN3142).
Activity 7.2 Consider a test that has a Type I error rate of 5%, and power of 50%.
Suppose that, before running the test, the researcher thinks that both the null and the alternative are equally likely.
If the test indicates a rejection of the null hypothesis, what is the probability that the null is false?
If the test indicates a failure to reject the null hypothesis, what is the probability that the null is true?
Denote the events R = {Reject null}, A = {fAil to reject null}, T = {null is True}, F = {null is False}. Then we are given:

(1) $P(R|T) = 0.05$, $P(R|F) = 0.5$;

(2) $P(T) = P(F) = 0.5$.

(1) and (2) show that we can find $P(R \cap T) = P(R|T)P(T) = 0.025$ and $P(R \cap F) = P(R|F)P(F) = 0.25$, and therefore also $P(A \cap T) = P(T) - P(R \cap T) = 0.475$ and $P(A \cap F) = P(F) - P(R \cap F) = 0.25$. Once we know the probabilities of the elementary events, we can find everything about everything.
Figure 1. Elementary events
Answering the first question: just plug the probabilities in $P(F|R) = \frac{P(F \cap R)}{P(R)} = \frac{0.25}{0.025 + 0.25} \approx 0.909$.

Answering the second question: just plug the probabilities in $P(T|A) = \frac{P(T \cap A)}{P(A)} = \frac{0.475}{0.475 + 0.25} \approx 0.655$.
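The whole computation fits in a few lines of Python; the numbers come straight from the activity.

```python
# Given: Type I error rate P(R|T) = 0.05, power P(R|F) = 0.50,
# and equal priors P(T) = P(F) = 0.5.
P_T, P_F = 0.5, 0.5
P_RT = 0.05 * P_T   # P(R and T) = P(R|T) P(T)
P_RF = 0.50 * P_F   # P(R and F) = P(R|F) P(F)
P_AT = P_T - P_RT   # P(A and T)
P_AF = P_F - P_RF   # P(A and F)

# Question 1: probability the null is false given a rejection
print(P_RF / (P_RT + P_RF))  # 0.25 / 0.275, about 0.909
# Question 2: probability the null is true given a failure to reject
print(P_AT / (P_AT + P_AF))  # 0.475 / 0.725, about 0.655
```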
Patton uses Bayes' theorem and the law of total probability. The solution suggested above uses only additivity of probability and the definition of conditional probability.
In this post we discuss several interrelated concepts: the null and alternative hypotheses, Type I and Type II errors, and their probabilities. Review the definitions of a sample space and elementary events and that of a conditional probability.
Type I and Type II errors
Regarding the true state of nature we assume two mutually exclusive possibilities: the null hypothesis (say, the suspect is guilty) and the alternative hypothesis (the suspect is innocent). It's up to us what to call the null and what to call the alternative. However, statistical procedures are not symmetric: it's easier to measure the probability of rejecting the null when it is true than the other probabilities involved. This is why what is desirable to prove is usually designated as the alternative.
Usually in books you can see the following table.
                      Decision taken
State of nature       Fail to reject null     Reject null
Null is true          Correct decision        Type I error
Null is false         Type II error           Correct decision
This table is not good enough because there is no link to probabilities. The next video fills in the blanks.
Here we assume that a point moves along a straight line and try to find its speed in the usual way: we divide the distance traveled by the time it takes to travel it. The ratio is the average speed over a time interval. As we reduce the length of the time interval, we get a better and better approximation to the exact (instantaneous) speed at a point in time.
Video 1. Derivative is speed
Position of a point as a function of time
Working with the visualization of the point's movement on a straight line is inconvenient because it is difficult to correlate the point's position with time. It is much better to visualize the movement on the space-time plane, where the horizontal axis is for time and the vertical axis is for the point's position.
Video 2. Position of point as function of time
Measuring the slope of a straight line
A little digression: how do you measure the slope of a straight line, if you know the values of the function at different points?
Video 3. Measuring the slope of a straight line
Derivative as the slope of a tangent line
This is like putting two and two together: we apply the previous definition to the slope of a secant drawn through two points on a graph. Then it remains to notice that the secant approaches the tangent line, as the second point approaches the first.
Video 4. Derivative as tangent slope
From function to its derivative
This is a very useful exercise that later allows one to come up with the optimization conditions, called the first order and second order conditions.
Video 5. From function to its derivative
Conclusion
Let $f$ be some function and fix an initial point $x_0$. The derivative is defined as the limit

$$f'(x_0) = \lim_{h \to 0} \frac{f(x_0 + h) - f(x_0)}{h}.$$
When $f$ describes the movement of a point along a straight line, the derivative gives the speed of that point. When the graph of $f$ is drawn on a plane, the derivative gives the slope of the tangent line to the graph.
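The limit can be watched numerically. Here is a small sketch with the arbitrary choice $f(x) = x^2$ at $x_0 = 3$, where the exact derivative is 6:

```python
def f(x):
    return x * x  # example function; its derivative is 2x

x0 = 3.0
for h in (1.0, 0.1, 0.01, 0.001):
    slope = (f(x0 + h) - f(x0)) / h  # secant slope over [x0, x0 + h]
    print(h, slope)  # the slopes approach the derivative 6 as h shrinks
```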
This will be a simple post explaining the common observation that "in Economics, variability of many variables is proportional to those variables". Make sure to review the assumptions; they tend to slip from memory. We consider the simple regression
(1) $y_i = \beta_1 + \beta_2 x_i + u_i, \quad i = 1, \dots, n.$
One of the classical assumptions is
Homoscedasticity. All errors have the same variance: $\operatorname{Var}(u_i) = \sigma^2$ for all $i$.
We discuss its opposite, which is
Heteroscedasticity. Not all errors have the same variance. It would be wrong to write it as $\operatorname{Var}(u_i) \neq \sigma^2$ for all $i$ (which means that all errors have variance different from $\sigma^2$). You can write that not all $\operatorname{Var}(u_i)$ are the same, but it's better to use the verbal definition.
Remark about Video 1. The dashed lines can represent mean consumption. Then the fact that variation of a variable grows with its level becomes more obvious.
Video 1. Case for heteroscedasticity
Figure 1. Illustration from Dougherty: as x increases, variance of the error term increases
Homoscedasticity was used in the derivation of the variance of the OLS estimator; under heteroscedasticity that expression is no longer valid. There are other implications, which will be discussed later.
Companies example. The Samsung Galaxy Note 7 battery fires and explosions that caused two recalls cost the smartphone maker at least $5 billion. There is no way a small company could have such losses.
GDP example. The error in measuring US GDP is on the order of $200 bln, which is comparable to the GDP of Kazakhstan. However, the standard deviation of the ratio error/GDP seems to be about the same across countries, if the underground economy is not too big. Often the assumption that the standard deviation of the regression error is proportional to one of the regressors is plausible.
To see if the regression error is heteroscedastic, you can look at the graph of the residuals or use statistical tests.
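Here is a simulation sketch of the residual-based check. All numbers are hypothetical: the error standard deviation is set proportional to the regressor, and comparing residual variances across low and high x is in the spirit of the Goldfeld-Quandt idea, not a formal test.

```python
import random

random.seed(1)
n = 2_000
x = [random.uniform(1, 10) for _ in range(n)]
# Heteroscedastic errors: sd(u_i) = 0.5 * x_i by construction
y = [1.0 + 2.0 * xi + random.gauss(0, 0.5 * xi) for xi in x]

# OLS fit of y on x
mx, my = sum(x) / n, sum(y) / n
b2 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
      / sum((xi - mx) ** 2 for xi in x))
b1 = my - b2 * mx
res = [yi - b1 - b2 * xi for xi, yi in zip(x, y)]

# Compare residual variance where x is small vs where x is large
pairs = sorted(zip(x, res))
low = [r for _, r in pairs[: n // 2]]
high = [r for _, r in pairs[n // 2:]]

def var(v):
    m = sum(v) / len(v)
    return sum((r - m) ** 2 for r in v) / len(v)

print(var(high) / var(low))  # well above 1, signaling heteroscedasticity
```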
This is a large topic which requires several posts or several book chapters. During a conference in Sweden in 2010, a Swedish statistician asked me: "What is Econometrics, anyway? What tools does it use?" I said: "Among others, it uses linear regression." He said: "But linear regression is a general statistical tool, why do they say it's a part of Econometrics?" My answer was: "Yes, it's a general tool but the name Econometrics emphasizes that the motivation for its applications lies in Economics".
Both classical assumptions and their violations should be studied with this point in mind: What is the Economics and Math behind each assumption?
Violations of the first three assumptions
We consider the simple regression
(1) $y_i = \beta_1 + \beta_2 x_i + u_i, \quad i = 1, \dots, n.$
Make sure to review the assumptions. Their numbering and names sometimes differ from those in Dougherty's book. In particular, most of the time I omit the following assumption:
A6. The model is linear in parameters and correctly specified.
When the model is not linear in parameters, you can think of nonlinear alternatives. Instead of saying "correctly specified," I say "true model" when a "wrong model" is available.
A1. What if the existence condition is violated? If the variance of the regressor is zero, the OLS estimator does not exist. The fitted line would have to be vertical, and you can regress $x$ on $y$ instead. Violation of the existence condition in the case of multiple regression leads to multicollinearity, and that's where economic considerations are important.
A2. The convenience condition is called so because, when it is violated (that is, when the regressor is stochastic), there are ways to deal with the problem: finite-sample theory and large-sample theory.
A3. What if the errors in (1) have means different from zero? This question can be divided in two: 1) the means of the errors are all the same, $Eu_i = c$ for all $i$, and 2) the means are different. Read the post about centering and see if you can come up with the answer to the first question. The means may be different because of the omission of a relevant variable (can you do the math?). In the absence of data on such a variable, there is nothing you can do.
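The "can you do the math?" above can also be checked by simulation. A sketch with made-up numbers: the true model contains a second regressor z correlated with x; regressing y on x alone absorbs z into the error term, whose mean given x then varies across observations, and the slope estimate lands near 2 + 3·0.8 = 4.4 instead of the true 2.

```python
import random

random.seed(2)
n = 50_000
x = [random.gauss(0, 1) for _ in range(n)]
z = [0.8 * xi + random.gauss(0, 1) for xi in x]   # z correlated with x
y = [1 + 2 * xi + 3 * zi + random.gauss(0, 1)     # true model includes z
     for xi, zi in zip(x, z)]

# OLS of y on x alone: z is omitted and ends up in the error term
mx, my = sum(x) / n, sum(y) / n
b2 = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
      / sum((xi - mx) ** 2 for xi in x))
print(b2)  # near 2 + 3 * 0.8 = 4.4 rather than the true coefficient 2
```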