
Applied Econometrics 
Econ 508 - Fall 2014

Professor: Roger Koenker 

TA: Nicolas Bottan 

Welcome to e-Tutorial, your on-line help for Econ 508. This issue focuses on the basic features of Box-Cox transformations and partial residual plots in R. The material presented below is designed to enhance your understanding of the topics and your performance on the homework.1

Introduction

In problem set 1, question 1, you are asked to estimate two demand equations for bread using the data set available here (or, if you prefer, visit the data set collection on the Econ 508 web page, under the name “giffen”). You can save the data using the techniques suggested in e-Tutorial 1. As a general guideline, I suggest you script your work, and if you are bold enough I encourage you to use Sweave or knitr.

Parts (i)-(iii) of the problem set involve simple linear regression and hypothesis testing that should be straightforward to solve once you are familiar with R. For every hypothesis test, make clear what the null and alternative hypotheses are. Please also provide a simple table with the main estimation results, and ALWAYS include standard errors for ALL parameters you estimate. Remember the first rule of empirical paper writing: “All good estimates deserve a standard error”. Another useful piece of advice is to summarize your main conclusions. You are strongly encouraged to structure your problem set as a paper. Finally, graphs are very welcome as long as you provide labels and refer to them in your comments. Don’t include any remaining material (e.g., raw software output or your preliminary regressions) in your report.
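As a minimal sketch of one way to pull such a table out of R (using simulated data and hypothetical variable names, since the giffen variables will differ):

    # Hypothetical example: extract the coefficient table (estimates, standard
    # errors, t values, p-values) from a fitted lm object for your write-up
    set.seed(1)
    x <- rnorm(50)
    y <- 1 + 2*x + rnorm(50)
    fit <- lm(y ~ x)
    round(summary(fit)$coefficients, 3)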

Partial Residual Plot

Question 1, part (iv) requires you to compare the plots of the Engel curves for bread in the “short” and “long” versions of the model, using a partial residual plot for the latter model. As mentioned in Professor Koenker’s Lecture 2, “the partial residual plot is a device for representing the final step of a multivariate regression result as a bivariate scatterplot.” Here is how you do that:

Theorem (Gauss-Frisch-Waugh)

Recall the results of the Gauss-Frisch-Waugh theorem in Professor Koenker’s Lecture Note 2 (pages 8-9). Here you will see that you can obtain the same coefficient and standard error for a given covariate by using a partial residual regression. I will show the result using the gasoline demand data available here. In this data set, y is log per capita gas consumption, p is the log gas price, and z is log per capita income. The variables are already in logarithmic form, so we are actually estimating log-linear models.

First you run the full model and note the coefficient and standard error of p.

    gas<-read.table("http://www.econ.uiuc.edu/~econ508/data/gasnew.txt",header=T)
    head(gas)
  year  ln.p. ln.z.  ln.y.
1 1947 -1.869 1.773 -1.495
2 1947 -1.824 1.745 -1.499
3 1948 -1.794 1.752 -1.493
4 1948 -1.752 1.738 -1.517
5 1948 -1.695 1.749 -1.538
6 1948 -1.680 1.755 -1.532
    y<-gas$ln.y.
    p<-gas$ln.p.
    z<-gas$ln.z.

    #Full Model
    FM<-lm(y~p+z)
    summary(FM)

Call:
lm(formula = y ~ p + z)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.21774 -0.04252  0.00604  0.06176  0.17049 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -5.2665     0.1122   -47.0   <2e-16 ***
p            -0.4075     0.0196   -20.8   <2e-16 ***
z             1.7593     0.0419    41.9   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.0794 on 198 degrees of freedom
Multiple R-squared:  0.944, Adjusted R-squared:  0.944 
F-statistic: 1.67e+03 on 2 and 198 DF,  p-value: <2e-16

Then you run a shorter version of the model, excluding p (My). After that you run another short model, in which you regress the omitted variable p on the same covariates as in the previous model (Mx).

    #My
    My<-lm(y~z)
    summary(My)

Call:
lm(formula = y ~ z)

Residuals:
    Min      1Q  Median      3Q     Max 
-0.4315 -0.0396  0.0250  0.0650  0.2841 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -3.1064     0.0742   -41.9   <2e-16 ***
z             0.9736     0.0320    30.4   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.141 on 199 degrees of freedom
Multiple R-squared:  0.823, Adjusted R-squared:  0.822 
F-statistic:  924 on 1 and 199 DF,  p-value: <2e-16

    #Mx
    Mx<-lm(p~z)
    summary(Mx)

Call:
lm(formula = p ~ z)

Residuals:
    Min      1Q  Median      3Q     Max 
-0.6250 -0.1762  0.0604  0.1633  0.6107 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   -5.301      0.151   -35.2   <2e-16 ***
z              1.928      0.065    29.6   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.286 on 199 degrees of freedom
Multiple R-squared:  0.815, Adjusted R-squared:  0.814 
F-statistic:  879 on 1 and 199 DF,  p-value: <2e-16

Finally, you regress the residuals of model My on the residuals of model Mx:

    pr<-lm(resid(My)~resid(Mx))
    summary(pr)

Call:
lm(formula = resid(My) ~ resid(Mx))

Residuals:
     Min       1Q   Median       3Q      Max 
-0.21774 -0.04252  0.00604  0.06176  0.17049 

Coefficients:
             Estimate Std. Error t value Pr(>|t|)    
(Intercept) -1.55e-17   5.58e-03     0.0        1    
resid(Mx)   -4.08e-01   1.96e-02   -20.8   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.0792 on 199 degrees of freedom
Multiple R-squared:  0.685, Adjusted R-squared:  0.683 
F-statistic:  433 on 1 and 199 DF,  p-value: <2e-16
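By the Gauss-Frisch-Waugh theorem, the slope here matches the coefficient on p from the full model exactly; only the degrees of freedom differ (199 versus 198), which is why the standard errors can diverge slightly at higher precision. A quick check using the objects defined above:

    # The residual-on-residual regression recovers the full-model coefficient on p
    coef(pr)[2]      # slope from the partial residual regression
    coef(FM)["p"]    # coefficient on p in the full model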

Next we can plot those residuals and add the fitted line:

    plot(resid(Mx),resid(My),main="Partial Residuals",xlab="Gasoline Price",ylab="Per Capita Gas Consumption")
    abline(pr)
    abline(v=0,lty=2)
    abline(h=0,lty=2)

[Figure: “Partial Residuals” — per capita gas consumption residuals plotted against gasoline price residuals, with the fitted line and dashed reference lines at zero]
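As an optional aside (not required for the problem set), if you have the car package installed, its avPlots function automates this residual-versus-residual construction under the name “added-variable plot”:

    # Optional shortcut: requires the car package
    library(car)
    avPlots(FM, terms = ~ p)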

Box-Cox Transformation

For question 2, parts (a)-(d) are also straightforward. You are expected to calculate the estimates in both linear and log-linear form. In addition, you are expected to run a Box-Cox version of the model and interpret it. Here I will give you some help, using the same gasoline demand data as above.

Just for a minute, suppose somebody told you that a nice gasoline demand equation should also include two additional covariates: the squared price of gas and the interaction of price and income. You can obtain those variables as follows:

    p2<-p^2
    pz<-p*z

Next you are asked to run this extended model, in a traditional log-linear form (remember that all covariates are already in logs). So, the easiest way to do that is as follows:

    summary(lm(y~p+z+p2+pz))

Call:
lm(formula = y ~ p + z + p2 + pz)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.18545 -0.03945  0.00715  0.03976  0.13905 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -7.3915     0.2032  -36.37  < 2e-16 ***
p            -2.5737     0.1858  -13.85  < 2e-16 ***
z             2.5712     0.0776   33.12  < 2e-16 ***
p2           -0.2713     0.0375   -7.23  1.1e-11 ***
pz            0.7170     0.0593   12.09  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.06 on 196 degrees of freedom
Multiple R-squared:  0.968, Adjusted R-squared:  0.968 
F-statistic: 1.5e+03 on 4 and 196 DF,  p-value: <2e-16
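As an aside, the same fit can be obtained without creating p2 and pz, by wrapping the arithmetic in I() inside the model formula:

    # I() makes R evaluate the expressions literally inside the formula
    summary(lm(y ~ p + z + I(p^2) + I(p*z)))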

The log-linear form seems to give a good account of gasoline demand.
Next, suppose you are so confident in this model that you write a paper on the topic and send it to a journal. Two weeks later you receive a letter from a referee saying she is suspicious of your log-linear equation. She asks you to reestimate the same model with the dependent variable in levels (i.e., without the log), keeping the rest of the equation as before, and to revise and resubmit the paper with your new findings.

In search of evidence supporting your original model, you run the following experiment:

1. Run the model suggested by the referee, using a Box-Cox transformation to find the MLE of \(\lambda\).
2. Plot the concentrated log-likelihood function.
3. Reestimate the model conditional on the MLE of \(\lambda\):

    library(MASS)
    y<-exp(y)    # undo the log: the referee wants the dependent variable in levels
    g<-boxcox(y~p+z+p2+pz, lambda=c(-1,-0.5,-0.25,0,0.25,0.5,1))

[Figure: boxcox plot — concentrated log-likelihood as a function of \(\lambda\), with dotted lines marking the MLE and the 95% confidence interval]

Note first that we loaded the MASS package, which contains the boxcox function for linear models. The boxcox function plots values of \(\lambda\) against the log-likelihood of the resulting model. Our objective is to maximize the log-likelihood, and the function marks the optimum with a dotted vertical line. As you can see, the MLE of \(\lambda\) is very close to zero. The curve is interpolated with a spline to help you visualize the log-likelihood function and the MLE of \(\lambda\). The horizontal dotted line marks the cutoff for the 95% confidence interval.

To find the value of \(\lambda\) that maximizes the log-likelihood we use the which.max function:

    lambda <- g$x[which.max(g$y)]
    lambda
[1] -0.05051
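You can also recover the 95% confidence interval for \(\lambda\) from the object returned by boxcox, by inverting the likelihood-ratio test (a sketch using the g object above):

    # Keep the lambdas whose log-likelihood lies within qchisq(0.95, 1)/2
    # of the maximum; their range is the 95% confidence interval
    range(g$x[g$y > max(g$y) - qchisq(0.95, 1)/2])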

As you can see, the MLE is indeed very close to zero. Next we apply the power transform \(y^{(\lambda)} = (y^\lambda - 1)/\lambda\) to y and then fit the revised model:

    y <- ((y^lambda)-1)/lambda   # Box-Cox power transform at the estimated lambda
    summary(lm(y~p+z+p2+pz))

Call:
lm(formula = y ~ p + z + p2 + pz)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.19315 -0.04221  0.00817  0.04113  0.14718 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  -7.6351     0.2123  -35.97  < 2e-16 ***
p            -2.6252     0.1941  -13.53  < 2e-16 ***
z             2.6593     0.0811   32.79  < 2e-16 ***
p2           -0.2802     0.0392   -7.15  1.7e-11 ***
pz            0.7254     0.0620   11.71  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.0627 on 196 degrees of freedom
Multiple R-squared:  0.969, Adjusted R-squared:  0.968 
F-statistic: 1.51e+03 on 4 and 196 DF,  p-value: <2e-16

The latter regression (conditional on the MLE of \(\lambda\)) gives results close to your log-linear specification. Now you have reasonable support to write back to the referee and defend your original model.
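If you want to see the two sets of estimates side by side, one way (reusing objects defined above; recall that y now holds the Box-Cox transformed response) is:

    # Compare the original log-linear fit with the fit at the Box-Cox MLE
    loglin <- lm(gas$ln.y. ~ p + z + p2 + pz)  # refit the log-linear model
    bc <- lm(y ~ p + z + p2 + pz)              # y is already transformed
    cbind(loglinear = coef(loglin), boxcox = coef(bc))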

Andrews Test

Finally, for the Econ 508 problem set 1, question 2, you are also required to perform the Andrews (1971) test. As an example, I use the same gasoline data as above and follow the routine in Professor Koenker’s Lecture Note 2:

  1. Run the linear model and get the predicted values of y (call this variable \(\hat y\)):
    gas<-read.table("http://www.econ.uiuc.edu/~econ508/data/gasnew.txt",header=T)
    y<-exp(gas$ln.y.)
    p<-exp(gas$ln.p.)
    z <-exp(gas$ln.z.)
    h<-lm(y~p+z)
    summary(h)

Call:
lm(formula = y ~ p + z)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.10102 -0.02560  0.00444  0.02495  0.08223 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept) -0.15367    0.01123   -13.7   <2e-16 ***
p           -0.33759    0.01450   -23.3   <2e-16 ***
z            0.07411    0.00163    45.5   <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.0339 on 198 degrees of freedom
Multiple R-squared:  0.942, Adjusted R-squared:  0.941 
F-statistic: 1.61e+03 on 2 and 198 DF,  p-value: <2e-16

    yhat<-fitted(h)

  2. Reestimate the augmented model and test \(\gamma = 0\):
    Ly<-log(yhat)
    yLy<-yhat*Ly
    v<-lm(y~p+z+yLy)
    summary(v)

Call:
lm(formula = y ~ p + z + yLy)

Residuals:
     Min       1Q   Median       3Q      Max 
-0.10245 -0.01838  0.00565  0.01779  0.06370 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)  0.28451    0.04606    6.18  3.7e-09 ***
p           -0.29584    0.01270  -23.29  < 2e-16 ***
z            0.06516    0.00163   40.01  < 2e-16 ***
yLy          1.08263    0.11147    9.71  < 2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.028 on 197 degrees of freedom
Multiple R-squared:  0.961, Adjusted R-squared:  0.96 
F-statistic: 1.61e+03 on 3 and 197 DF,  p-value: <2e-16

    anova(h,v)
Analysis of Variance Table

Model 1: y ~ p + z
Model 2: y ~ p + z + yLy
  Res.Df   RSS Df Sum of Sq    F Pr(>F)    
1    198 0.228                             
2    197 0.154  1    0.0738 94.3 <2e-16 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

From the test above we can reject the null hypothesis that \(\gamma = 0\). Can you interpret what this means?
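As a quick consistency check (a sketch reusing the objects above): with a single restriction, the F statistic from anova is just the square of the t statistic on yLy.

    # With one restriction, F = t^2; compare with the anova F statistic above
    summary(v)$coefficients["yLy", "t value"]^2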


  1. Please send comments to bottan2@illinois.edu or srmntbr2@illinois.edu