Coefficient of determination: an inductive introduction to R squared
I know a person who did not understand this topic even though he had a PhD in Math. That was me, more than twenty years ago, and the reason was that the topic was presented formally, without explaining the leading idea.
Step 1. We want to describe the relationship between observed y's and x's using the simple regression $y_i = a + bx_i + e_i$.
Let us start with the simple case when there is no variability in y's, that is, the slope and the errors are zero. Since $y_i = a$ for all $i$, we have $\bar{y} = a$ and, of course,
$$\sum_i (y_i - \bar{y})^2 = 0. \qquad (1)$$
In the general case, we start with the decomposition
$$y_i = \hat{y}_i + e_i, \qquad (2)$$
where $\hat{y}_i$ is the fitted value and $e_i$ is the residual, see this post. We still want to see how far $y_i$ is from $\bar{y}$. With this purpose, from both sides of equation (2) we subtract $\bar{y}$, obtaining $y_i - \bar{y} = (\hat{y}_i - \bar{y}) + e_i$. Squaring this equation and summing over $i$, for the sum in (1) we get
$$\sum_i (y_i - \bar{y})^2 = \sum_i (\hat{y}_i - \bar{y})^2 + 2\sum_i (\hat{y}_i - \bar{y})e_i + \sum_i e_i^2. \qquad (3)$$
Whoever was the first to do this discovered that the cross product $\sum_i (\hat{y}_i - \bar{y})e_i$ is zero, and (3) simplifies to
$$\sum_i (y_i - \bar{y})^2 = \sum_i (\hat{y}_i - \bar{y})^2 + \sum_i e_i^2. \qquad (4)$$
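For completeness, here is a sketch of why the cross product vanishes. It assumes the fitted values come from ordinary least squares, so that the residuals satisfy the normal equations $\sum_i e_i = 0$ and $\sum_i x_i e_i = 0$ (this step is not spelled out in the derivation above):

```latex
% Sketch: the cross product vanishes under OLS, assuming
% \hat{y}_i = \hat{a} + \hat{b} x_i and the normal equations
% \sum_i e_i = 0, \sum_i x_i e_i = 0.
\begin{align*}
\sum_i (\hat{y}_i - \bar{y}) e_i
  &= \sum_i (\hat{a} + \hat{b} x_i - \bar{y}) e_i \\
  &= (\hat{a} - \bar{y}) \sum_i e_i + \hat{b} \sum_i x_i e_i \\
  &= 0.
\end{align*}
```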
The rest is a matter of definitions:

Total Sum of Squares: $TSS = \sum_i (y_i - \bar{y})^2$ (I prefer to call this the total variation around $\bar{y}$);

Explained Sum of Squares: $ESS = \sum_i (\hat{y}_i - \bar{y})^2$ (to me this is the explained variation around $\bar{y}$);

Residual Sum of Squares: $RSS = \sum_i e_i^2$ (the unexplained variation around $\bar{y}$, caused by the error term).
Thus from (4) we have
$$TSS = ESS + RSS. \qquad (5)$$
Step 2. It is desirable to have RSS close to zero and ESS close to TSS. Therefore we can use the ratio ESS/TSS as a measure of how well the regression describes the relationship between y's and x's. From (5) it follows that this ratio takes values between zero and 1. Hence, the coefficient of determination
$$R^2 = \frac{ESS}{TSS}$$
can be interpreted as the percentage of the total variation of y's around $\bar{y}$ explained by the regression. From (5) an equivalent definition is
$$R^2 = 1 - \frac{RSS}{TSS}.$$
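The decomposition and the two definitions of $R^2$ are easy to check numerically. Below is a minimal Python sketch (the data set and the `ols_fit` helper are made up for illustration, not taken from the original) that fits a simple OLS regression and verifies that TSS = ESS + RSS and that ESS/TSS equals 1 - RSS/TSS:

```python
# A minimal numerical check of TSS = ESS + RSS and the two R^2 definitions,
# using illustrative data and a hand-rolled OLS fit.

def ols_fit(x, y):
    """Return OLS intercept a and slope b for the simple regression y = a + b*x."""
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) \
        / sum((xi - x_bar) ** 2 for xi in x)
    a = y_bar - b * x_bar
    return a, b

x = [1.0, 2.0, 3.0, 4.0, 5.0]   # made-up regressor values
y = [2.1, 3.9, 6.2, 8.1, 9.8]   # made-up observations

a, b = ols_fit(x, y)
y_hat = [a + b * xi for xi in x]                # fitted values
e = [yi - fi for yi, fi in zip(y, y_hat)]       # residuals
y_bar = sum(y) / len(y)

TSS = sum((yi - y_bar) ** 2 for yi in y)        # total variation around y-bar
ESS = sum((fi - y_bar) ** 2 for fi in y_hat)    # explained variation
RSS = sum(ei ** 2 for ei in e)                  # unexplained variation

r_squared = ESS / TSS
print(f"TSS={TSS:.4f}, ESS={ESS:.4f}, RSS={RSS:.4f}, R^2={r_squared:.4f}")
```

Up to floating-point rounding, `TSS` equals `ESS + RSS`, and `ESS / TSS` coincides with `1 - RSS / TSS`, which is exactly identity (5) and the equivalence of the two definitions.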
Back to the pearls of AP Statistics
How much of the above can be explained without algebra? Stats without algebra is a crippled creature. I am afraid any concept requiring substantial algebra would have to be dropped from the AP Stats curriculum. Compare this post with the explanation on p. 592 of Agresti and Franklin.