Chapter 18 Econometric Notation
I emphasize concepts over notation. For example, I prefer to say (and write) “slope” rather than \(m\). I prefer to think of the SD as the “r.m.s. of the deviations from the average” rather than \(\sqrt{\frac{ \sum \left ( X - \overline{X} \right) ^2}{n}}\).
Unfortunately, we eventually need to represent ideas with symbols so that we can present and reason about them precisely.
With that in mind, most political scientists represent the regression model with “econometric notation.”
The textbook by FPP offers a nice, intuitive discussion of regression and other statistical tools. Unfortunately, their discussion deviates from the usual notation-heavy presentation in political science, which borrows heavily from econometrics. We’re going to dive into that notation a bit in these notes.
18.1 Simple Regression Model
FPP represent the regression model as \(y = mx + b\), where we understand the \(y\) on the left-hand side as the “predicted value” or “average value” of the outcome rather than the observed outcome itself.
The standard econometric notation represents the same idea slightly differently, as \(y = \beta_0 + \beta_1x + u\). There are a few differences:
- In econometric notation, \(y\) represents the observed outcome variable, not the predicted or average outcome that \(y\) represents in FPP’s notation.
- The intercept is denoted as \(\beta_0\) rather than \(b\) and is typically written first. Similarly, the slope is denoted as \(\beta_1\) rather than \(m\).
- The term \(u\) is new. It represents the error: the vertical distance between the regression line and the points. This allows \(y\) to represent the observed outcome rather than the predicted or average outcome. \(u\) is referred to as the “error,” “error term,” or “disturbance.” (A short simulation sketch follows this list.)
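To make these pieces concrete, here is a minimal simulation sketch in Python with NumPy; the true values \(\beta_0 = 1\) and \(\beta_1 = 2\), the sample size, and the distributions of \(x\) and \(u\) are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(1234)
n = 100                      # number of observations

beta0, beta1 = 1.0, 2.0      # true intercept and slope (arbitrary values)

x = rng.uniform(0, 10, size=n)   # observed explanatory variable
u = rng.normal(0, 1, size=n)     # error term: vertical distance from the true line

# the observed outcome is the true line plus the error
y = beta0 + beta1 * x + u
```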
The observed variables \(y\) and \(x\) have different names depending on the author and discipline.
| \(y\) | \(x\) |
|---|---|
| Dependent Variable | Independent Variable |
| Outcome Variable | Explanatory Variable |
| Response | Predictor |
| Regressand | Regressor |
|  | Feature |
|  | Covariate |
Economists draw a sharp distinction between the actual regression model \(y = \beta_0 + \beta_1x + u\) and the estimated regression model \(\hat{y} = \hat{\beta}_0 + \hat{\beta}_1x\). This distinction is important for statistical theory, because it allows methodologists to evaluate approaches to estimating \(\beta_0\) and \(\beta_1\). The idea is that there is some true model (the actual relationship between \(x\) and \(y\)), but the researcher can only find estimates of \(\beta_0\) and \(\beta_1\). The estimate is distinguished from the true value by adding a “hat,” so that the estimate of the true value \(\beta_0\) is denoted as \(\hat{\beta}_0\).
Similarly, \(u\) is the distance between the observed values of \(y\) and the true regression line \(\beta_0 + \beta_1x\). \(\hat{u}\) represents the distance between observed values of \(y\) and the estimated regression line \(\hat{\beta}_0 + \hat{\beta}_1x\).
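Continuing the simulation sketch above (still made-up data, not a real example), least squares produces estimates \(\hat{\beta}_0\) and \(\hat{\beta}_1\) that are close to, but not exactly equal to, the true values, and the residuals \(\hat{u}\) measure distance from the estimated line rather than the true one.

```python
# least squares estimates: np.polyfit returns the slope first, then the intercept
beta1_hat, beta0_hat = np.polyfit(x, y, deg=1)

y_hat = beta0_hat + beta1_hat * x   # points on the estimated regression line
u_hat = y - y_hat                   # residuals: distance from the *estimated* line

print(beta0_hat, beta1_hat)   # close to, but not exactly, 1.0 and 2.0
```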
In order to justify the least squares estimator (i.e., using the line that minimizes the r.m.s. error), econometricians make several assumptions about the model \(y = \beta_0 + \beta_1x + u\). However, in my view, these assumptions are not particularly important because least squares tends to work well even when the assumptions are not met. Also, we don’t yet have the probability theory we need to describe these assumptions.
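As a quick numerical check of the “minimizes the r.m.s. error” idea (again on the simulated data above), any line other than the least squares line produces a larger r.m.s. error.

```python
def rms_error(b0, b1):
    """r.m.s. of the vertical distances between the points and the line b0 + b1*x."""
    return np.sqrt(np.mean((y - (b0 + b1 * x)) ** 2))

print(rms_error(beta0_hat, beta1_hat))        # the least squares line: smallest
print(rms_error(beta0_hat + 0.5, beta1_hat))  # a shifted line: larger
print(rms_error(beta0_hat, beta1_hat + 0.1))  # a tilted line: larger
```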
18.2 Multiple Regression Model
We can add explanatory variables by expanding the econometric notation. With two explanatory variables, the model becomes \(y = \beta_0 + \beta_1x_1 + \beta_2x_2 + u\); we attach the subscripts 1 and 2 to distinguish the two variables.
Similarly, we could expand the model to three explanatory variables, so that \(y = \beta_0 + \beta_1x_1 + \beta_2x_2 + \beta_3x_3 + u\). In general, we can have \(k\) explanatory variables in the model, so that \(y = \beta_0 + \beta_1x_1 + \beta_2x_2 + ... + \beta_kx_k + u\).
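As a sketch of what this looks like in practice (again with made-up data and arbitrary true coefficients), we can stack a column of ones and the \(k\) explanatory variables into a matrix and compute the least squares estimates of \(\beta_0, \beta_1, \ldots, \beta_k\) all at once.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500

# three made-up explanatory variables, arbitrary true coefficients, and an error term
x1 = rng.uniform(0, 10, size=n)
x2 = rng.uniform(0, 5, size=n)
x3 = rng.normal(0, 1, size=n)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + 0.5 * x3 + rng.normal(0, 1, size=n)

# columns: a constant (for the intercept), then x1, x2, x3
X = np.column_stack([np.ones(n), x1, x2, x3])

# least squares estimates of beta_0, beta_1, beta_2, beta_3
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta_hat)   # roughly [1.0, 2.0, -3.0, 0.5]
```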
The interpretation of the slope coefficients \(\beta_1\), \(\beta_2\), …, and \(\beta_k\) remains similar to our interpretation of \(m\): as \(x_i\) increases by one unit, the average value of \(y\) increases by \(\beta_i\) units, where \(i\) equals 1, 2, …, or \(k\). We need to add to this interpretation, though, and specify “holding all other explanatory variables in the model fixed.”
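Continuing the sketch above, the “holding all other explanatory variables fixed” interpretation shows up directly in the fitted values: raising \(x_1\) by one unit while leaving \(x_2\) and \(x_3\) alone changes the prediction by exactly \(\hat{\beta}_1\).

```python
b0, b1, b2, b3 = beta_hat

# predicted y at chosen values of (x1, x2, x3), and with x1 one unit higher
# while x2 and x3 are held fixed; the difference is the estimated slope b1
pred_before = b0 + b1 * 3.0 + b2 * 2.0 + b3 * 0.0
pred_after  = b0 + b1 * 4.0 + b2 * 2.0 + b3 * 0.0

print(pred_after - pred_before)   # equals b1, up to floating point error
print(b1)
```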
The “holding all other explanatory variables fixed” part is what makes multiple regression useful for causal questions. If we include the confounders in the regression model, then we can “control” for them and draw causal inferences. This depends on strong assumptions, but it is a common approach to causal inference in political science.
See below for the least squares estimate of the regression model \(y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + u\). (In this case, the variables are fictional.) Notice that we are now fitting a plane, not a line. You can spin the plot around to better visualize the plane.
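As a rough static analog of that interactive plot (a sketch with made-up data, using matplotlib rather than the plotting tool used for the figure below), fitting and drawing a plane might look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)
n = 200

# two made-up explanatory variables and an arbitrary true plane plus error
x1 = rng.uniform(0, 10, size=n)
x2 = rng.uniform(0, 5, size=n)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(0, 2, size=n)

# least squares fit of the plane b0 + b1*x1 + b2*x2
X = np.column_stack([np.ones(n), x1, x2])
(b0, b1, b2), *_ = np.linalg.lstsq(X, y, rcond=None)

# evaluate the fitted plane on a grid and draw it along with the data points
g1, g2 = np.meshgrid(np.linspace(0, 10, 25), np.linspace(0, 5, 25))
plane = b0 + b1 * g1 + b2 * g2

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(x1, x2, y, s=10)                # observed points
ax.plot_surface(g1, g2, plane, alpha=0.3)  # least squares plane
ax.set_xlabel("x1"); ax.set_ylabel("x2"); ax.set_zlabel("y")
plt.show()
```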