The Frisch–Waugh–Lovell (FWL) Theorem

Estimating a Subset of Regression Coefficients

1. Model Setup

Consider the linear regression model:

[ \begin{aligned} Y &= X \beta + U \\
&= [X_1 \quad X_2] \begin{bmatrix} \beta_1 \\ \beta_2 \end{bmatrix} + U \\
&= X_1 \beta_1 + X_2 \beta_2 + U \end{aligned} ]

where ( Y ) is the ( n \times 1 ) vector of outcomes, ( X_1 ) is an ( n \times k_1 ) matrix of regressors, ( X_2 ) is an ( n \times k_2 ) matrix of regressors, ( \beta_1 ) and ( \beta_2 ) are the corresponding coefficient vectors, and ( U ) is the ( n \times 1 ) vector of errors.

Our objective is to estimate only ( \beta_2 ).


2. Idea: Partialling Out ( X_1 )

To isolate the effect of ( X_2 ), we remove (project out) the variation explained by ( X_1 ).

Let:

[ M_{X_1} = I - X_1 (X_1' X_1)^{-1} X_1' ]

be the orthogonal projection matrix onto the orthogonal complement of the column space of ( X_1 ).

Key property:

[ M_{X_1} X_1 = 0 ]
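The key property can be checked numerically. The sketch below (with hypothetical simulated data) builds ( M_{X_1} ) exactly as defined above and verifies that it annihilates ( X_1 ) and is symmetric and idempotent:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example data: n observations, X1 has k1 columns.
n, k1 = 100, 3
X1 = rng.normal(size=(n, k1))

# Annihilator matrix M_{X1} = I - X1 (X1'X1)^{-1} X1'.
M_X1 = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)

print(np.allclose(M_X1 @ X1, 0))        # annihilates X1 -> True
print(np.allclose(M_X1, M_X1.T))        # symmetric -> True
print(np.allclose(M_X1 @ M_X1, M_X1))   # idempotent -> True
```

The symmetry and idempotency checked here are exactly the properties used in the derivation of Section 4.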


3. Projecting the Model

Pre-multiply the model by ( M_{X_1} ):

[ M_{X_1} Y = M_{X_1} X_1 \beta_1 + M_{X_1} X_2 \beta_2 + M_{X_1} U ]

Since ( M_{X_1} X_1 = 0 ), the first term vanishes. Thus,

[ M_{X_1} Y = M_{X_1} X_2 \beta_2 + M_{X_1} U ]

After projection, the effect of ( X_1 ) disappears.


4. Estimation of ( \beta_2 )

Now run OLS of ( M_{X_1} Y ) on ( M_{X_1} X_2 ):

[ \hat{\beta}_2 = \left[ (M_{X_1} X_2)' (M_{X_1} X_2) \right]^{-1} (M_{X_1} X_2)' M_{X_1} Y ]

Using the fact that ( M_{X_1} ) is symmetric ( ( M_{X_1}' = M_{X_1} ) ) and idempotent ( ( M_{X_1} M_{X_1} = M_{X_1} ) ), so that ( M_{X_1}' M_{X_1} = M_{X_1} ), we get:

[ \begin{aligned} \hat{\beta}_2 &= \left[ X_2' M_{X_1}' M_{X_1} X_2 \right]^{-1} X_2' M_{X_1}' M_{X_1} Y \\
&= \left[ X_2' M_{X_1} X_2 \right]^{-1} X_2' M_{X_1} Y \end{aligned} ]
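A short numerical check (with hypothetical simulated data) confirms the derivation: the ( \hat{\beta}_2 ) obtained from the partialled-out regression coincides with the coefficients on ( X_2 ) from the full OLS regression on ( [X_1 \quad X_2] ):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulated data for the partitioned model Y = X1 b1 + X2 b2 + U.
n, k1, k2 = 200, 2, 3
X1 = rng.normal(size=(n, k1))
X2 = rng.normal(size=(n, k2))
Y = X1 @ np.array([1.0, -1.0]) + X2 @ np.array([0.5, 2.0, -0.5]) + rng.normal(size=n)

# Full OLS on [X1 X2]; the last k2 entries are beta_2 hat.
X = np.hstack([X1, X2])
beta_full = np.linalg.lstsq(X, Y, rcond=None)[0]

# FWL: partial out X1, then regress M Y on M X2.
M = np.eye(n) - X1 @ np.linalg.solve(X1.T @ X1, X1.T)
beta_2 = np.linalg.lstsq(M @ X2, M @ Y, rcond=None)[0]

print(np.allclose(beta_2, beta_full[k1:]))  # True
```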


5. Interpretation

This result shows that the OLS estimate of ( \beta_2 ) from the full regression of ( Y ) on ( X_1 ) and ( X_2 ) is identical to the estimate obtained by first residualizing ( Y ) and ( X_2 ) with respect to ( X_1 ), and then regressing the residualized ( Y ) on the residualized ( X_2 ).

This is the Frisch–Waugh–Lovell (FWL) Theorem.


6. Key Takeaway

To estimate a subset of coefficients:

[ \boxed{ \hat{\beta}_2 = (X_2' M_{X_1} X_2)^{-1} X_2' M_{X_1} Y } ]

You can obtain ( \hat{\beta}_2 ) by partialling out the effect of ( X_1 ) before running OLS.
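In practice one rarely forms the ( n \times n ) matrix ( M_{X_1} ) explicitly; partialling out is done by computing residuals from two auxiliary regressions. A minimal sketch (with hypothetical data; the `residualize` helper is illustrative, not a standard API):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data; X1 includes an intercept column.
n = 500
X1 = np.column_stack([np.ones(n), rng.normal(size=n)])
X2 = rng.normal(size=(n, 2))
Y = X1 @ np.array([0.3, 1.0]) + X2 @ np.array([2.0, -1.5]) + rng.normal(size=n)

def residualize(A, X1):
    """Residuals from regressing each column of A on X1 (i.e., M_{X1} A),
    without ever building the n-by-n annihilator matrix."""
    coef = np.linalg.lstsq(X1, A, rcond=None)[0]
    return A - X1 @ coef

# Regress residualized Y on residualized X2 to recover beta_2 hat.
beta_2 = np.linalg.lstsq(residualize(X2, X1), residualize(Y, X1), rcond=None)[0]
beta_full = np.linalg.lstsq(np.hstack([X1, X2]), Y, rcond=None)[0]
print(np.allclose(beta_2, beta_full[2:]))  # True
```

Avoiding the explicit ( n \times n ) matrix keeps memory usage at ( O(n k) ) instead of ( O(n^2) ), which matters for large samples.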
