Notes on Regression - OLS
This post is the first in a series of my study notes on regression techniques. I first learnt about regression as a way of fitting a line through a series of points. Invoke some assumptions and one obtains the relationship between two variables. Simple...or so I thought. Through the course of my study, I developed a deeper appreciation of its nuances, which I hope to elucidate in this set of notes.
Aside: The advancement of regression analysis since Gauss introduced it in the early 19th century is an interesting case study in the development of applied mathematics. The method remains roughly the same, but advances in related fields (linear algebra, statistics) and applied econometrics have helped clarify the assumptions used and elevated its status in modern applied research.
In this review, I shall focus on ordinary least squares (OLS) regression and omit treatment of its many descendants.1 Let's start at the source and cover regression as a solution to the least squares minimisation problem, before heading into deeper waters!
Preliminaries / Notation
Using matrix notation, let $n$ denote the number of observations and $k$ denote the number of regressors.
The vector of outcome variables $\mathbf{y}$ is an $n \times 1$ matrix,
$$\mathbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ \vdots \\ y_n \end{pmatrix}$$
The matrix of regressors $X$ is an $n \times k$ matrix (or each row $\mathbf{x}_i'$ is a $1 \times k$ vector),
$$X = \begin{pmatrix} x_{11} & x_{12} & \cdots & x_{1k} \\ x_{21} & x_{22} & \cdots & x_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ x_{n1} & x_{n2} & \cdots & x_{nk} \end{pmatrix} = \begin{pmatrix} \mathbf{x}_1' \\ \mathbf{x}_2' \\ \vdots \\ \mathbf{x}_n' \end{pmatrix}$$
The vector of error terms $\boldsymbol{\epsilon}$ is also an $n \times 1$ matrix.
At times it might be easier to use vector notation. For consistency, I will use bold lowercase letters (e.g. $\mathbf{x}$) to denote vectors and capital letters to denote matrices. Single observations are denoted by the subscript $i$.
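To make the notation concrete, here is a minimal NumPy sketch of the objects defined above; the dimensions and variable names are illustrative assumptions, not part of the notes themselves.

```python
import numpy as np

rng = np.random.default_rng(42)
n, k = 100, 3                       # n observations, k regressors (illustrative values)

y = rng.normal(size=(n, 1))         # vector of outcome variables: n x 1
X = rng.normal(size=(n, k))         # matrix of regressors: each row is x_i', a 1 x k vector
eps = rng.normal(size=(n, 1))       # vector of error terms: n x 1

print(y.shape, X.shape, eps.shape)  # (100, 1) (100, 3) (100, 1)
```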
Least Squares
Start with the linear model:
$$\mathbf{y} = X\beta + \boldsymbol{\epsilon}$$
Assumptions:
- Linearity (given above)
- $E[\boldsymbol{\epsilon} \mid X] = 0$ (conditional independence)
- $\text{rank}(X) = k$ (no multi-collinearity, i.e. full rank)
- $E[\boldsymbol{\epsilon}\boldsymbol{\epsilon}' \mid X] = \sigma^2 I_n$ (homoskedasticity)
Aim:
Find $\hat{\beta}$ that minimises the sum of squared errors:
$$\hat{\beta} = \arg\min_{\beta} \; (\mathbf{y} - X\beta)'(\mathbf{y} - X\beta)$$
Solution:
$$(\mathbf{y} - X\beta)'(\mathbf{y} - X\beta) = \mathbf{y}'\mathbf{y} - \beta'X'\mathbf{y} - \mathbf{y}'X\beta + \beta'X'X\beta$$
Hints: $\beta'X'\mathbf{y}$ is a scalar, and by symmetry $\mathbf{y}'X\beta = \beta'X'\mathbf{y}$, so the objective simplifies to $\mathbf{y}'\mathbf{y} - 2\beta'X'\mathbf{y} + \beta'X'X\beta$.
Take the matrix derivative w.r.t. $\beta$ and set it to zero:
$$-2X'\mathbf{y} + 2X'X\hat{\beta} = 0 \quad\Longrightarrow\quad \hat{\beta} = (X'X)^{-1}X'\mathbf{y}$$
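As a quick sanity check on the closed-form solution, here is a minimal NumPy sketch on simulated data (sample size, coefficients, and noise are made up for illustration); it computes $(X'X)^{-1}X'\mathbf{y}$ directly and cross-checks it against NumPy's least-squares solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500                                        # sample size (illustrative)

X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # regressors incl. a constant
beta = np.array([1.0, 2.0, -0.5])              # "true" coefficients for the simulation
y = X @ beta + rng.normal(size=n)              # y = X beta + eps

# Closed-form OLS: beta_hat = (X'X)^{-1} X'y (solve avoids forming an explicit inverse)
beta_hat = np.linalg.solve(X.T @ X, X.T @ y)

# Cross-check against NumPy's least-squares routine
beta_lstsq, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta_hat)
print(np.allclose(beta_hat, beta_lstsq))       # True
```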
Notes:
1. $\hat{\beta}$ is a linear estimator, i.e. it can be written in the form $A\mathbf{y}$ where $A = (X'X)^{-1}X'$ depends only on $X$ but not on $\mathbf{y}$.
2. Under assumptions 1-3, the estimator is unbiased (see the simulation sketch after this list). Substituting $\mathbf{y} = X\beta + \boldsymbol{\epsilon}$:
   $$\hat{\beta} = (X'X)^{-1}X'(X\beta + \boldsymbol{\epsilon}) = \beta + (X'X)^{-1}X'\boldsymbol{\epsilon}$$
   so $E[\hat{\beta} \mid X] = \beta + (X'X)^{-1}X'E[\boldsymbol{\epsilon} \mid X] = \beta$. By the law of iterated expectations, $E[\hat{\beta}] = E[E[\hat{\beta} \mid X]] = \beta$.
3. Adding in the homoskedasticity assumption, the OLS estimator is the Best Linear Unbiased Estimator (BLUE), i.e. it has the smallest variance among all linear unbiased estimators: for any other linear unbiased estimator $\tilde{\beta}$, $\text{Var}(\tilde{\beta} \mid X) - \text{Var}(\hat{\beta} \mid X)$ is p.s.d.
4. If the errors are normally distributed, then conditional on $X$, $\hat{\beta}$ is also normally distributed.
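To illustrate notes 1 and 2, here is a hedged simulation sketch: hold $X$ fixed, redraw the errors many times, and check that the average of $\hat{\beta}$ across draws is close to the true $\beta$. The design, coefficients, and number of replications are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 200, 5_000                            # sample size and Monte Carlo draws (arbitrary)

X = np.column_stack([np.ones(n), rng.normal(size=n)])  # fixed design with a constant
beta = np.array([1.0, -0.7])                    # "true" coefficients for the simulation
A = np.linalg.solve(X.T @ X, X.T)               # A = (X'X)^{-1} X', so beta_hat = A y

draws = np.empty((reps, 2))
for r in range(reps):
    eps = rng.normal(size=n)                    # E[eps | X] = 0, homoskedastic
    y = X @ beta + eps
    draws[r] = A @ y                            # beta_hat for this draw (linear in y)

print(draws.mean(axis=0))                       # close to [1.0, -0.7], i.e. unbiased
```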
Large Sample Properties
It is almost impossible for any real-life data to satisfy the above assumptions; an exception is when $\mathbf{y}$ and $\mathbf{x}$ are jointly normal, but that is a stretch to believe. To get around this issue, one can replace assumption 2 (conditional independence) with a weaker assumption: $E[\mathbf{x}_i \epsilon_i] = 0$ (weak exogeneity). Under this weaker assumption, the estimator is no longer unbiased.2 One must appeal to large sample theory to draw any meaningful results. More specifically, we use the idea of convergence in probability and the weak law of large numbers to show that the estimator is consistent.3
Assumptions:
- Linearity
- $E[\mathbf{x}_i \epsilon_i] = 0$ (weak exogeneity)
- $(y_i, \mathbf{x}_i)$ are i.i.d.
- $E[\mathbf{x}_i \mathbf{x}_i']$ is p.s.d.
- $E[\mathbf{x}_i \mathbf{x}_i' \epsilon_i^2]$ is p.s.d.
Notes:
- $\hat{\beta}_n$ is consistent since $\hat{\beta}_n \xrightarrow{p} \beta$ as $n \to \infty$.^[$\hat{\beta}_n$ is denoted with a subscript $n$ to signify that it is a function of the sample size.]
- Large sample assumptions 3 and 4 are needed to establish convergence in probability: writing $\hat{\beta}_n = \beta + \left(\frac{1}{n}\sum_{i} \mathbf{x}_i\mathbf{x}_i'\right)^{-1}\left(\frac{1}{n}\sum_{i} \mathbf{x}_i\epsilon_i\right)$, use the fact that $\frac{1}{n}\sum_{i} \mathbf{x}_i\mathbf{x}_i' \xrightarrow{p} E[\mathbf{x}_i\mathbf{x}_i']$ while $\frac{1}{n}\sum_{i} \mathbf{x}_i\epsilon_i \xrightarrow{p} E[\mathbf{x}_i\epsilon_i] = 0$ to prove consistency (a simulation sketch after this list illustrates the idea).
- Large sample assumptions 1-5 are used to prove asymptotic normality of the estimator.
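The consistency argument above lends itself to a quick simulation: under a data-generating process that satisfies weak exogeneity (the process below, with t-distributed errors, is my own illustrative choice), $\hat{\beta}_n$ should drift towards $\beta$ as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(7)
beta = np.array([2.0, 0.5])                     # "true" coefficients for the simulation

for n in [100, 1_000, 10_000, 100_000]:
    x = rng.normal(size=n)
    X = np.column_stack([np.ones(n), x])
    eps = rng.standard_t(df=5, size=n)          # non-normal errors with E[x_i eps_i] = 0
    y = X @ beta + eps
    beta_hat = np.linalg.solve(X.T @ X, X.T @ y)
    print(n, np.abs(beta_hat - beta).max())     # worst coefficient error shrinks with n
```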
Footnotes
The popularity and limitations of simple OLS regression have spawned many related techniques that are the subject of numerous research papers in their own right. ↩
Recall that unbiasedness requires conditional independence to hold but uncorrelatedness does not imply conditional independence. ↩
Similarly, the central limit theorem is used to establish convergence in distribution which is needed for statistical inference. ↩