Applied econometrics: Econ 508

Applied Econometrics 
Econ 508 - Fall 2014

Professor: Roger Koenker 

TA: Nicolas Bottan 

Welcome to e-Tutorial, your on-line help for Econ508. The present issue focuses on the basic operations of R. The core material was extracted from previous e-TAs, with subsequent additions by Jiaying Gu and material from the book "A First Course in Statistical Programming with R" by W. John Braun and Duncan J. Murdoch, available online through the university library. The usual disclaimers apply.1

What’s R and why use it?

R is a free, open-source, and object-oriented language. Free and open-source means that anyone is free to use, redistribute, and change the software in any way. Moreover, "R is 'GNU S', a freely available language and environment for statistical computing and graphics which provides a wide variety of statistical and graphical techniques" (http://cran.r-project.org).

There is a lot of software out there for data analysis that is prettier and seems easier than R, so why should you invest in learning R? First of all, learning R is an investment, not a waste of time. Three characteristics of R make it worth learning. First, it is free. Many fancier-looking packages used today are quite expensive, but R is free and will always be free. Second, R is a language, which means that you are not limited to the functions built into the software: you can create your own (to get an idea of the power of the R language, take a look at Professor Koenker's Quantile Regression package). Third, R is extremely well supported. If you have a question, you can just google it, post it to StackOverflow, or browse R-bloggers. If you are not convinced yet, type "why use the R language" in Google and the results will speak for themselves.

Downloading and Installing R

You can obtain a free copy of R CRAN (Comprehensive R Archive Network) on the web, by clicking http://cran.r-project.org and choosing your appropriate operating system. R is also currently available at the Econometric Lab, 126 DKH, for students enrolled in the Econometrics field or other classes that require lab experiments. The website for the lab is http://www.econ.illinois.edu/~lab.

The R Interface

After downloading R you can work with it in at least two ways: using the graphical interface or working in batch mode. Since this is introductory material to R and you are reading it, it is very likely that you will be using a graphical interface, so we'll center the e-TAs around that. After you have mastered the art of scripting in R, or if you are brave enough, you can try running your scripts in R batch mode. An extremely brief set of instructions on how to run it can be found at the Econometrics Lab website.

When using R interactively, or also with scripts, you can use a graphical user interface (GUI). There are at least two options for working this way with R. The first option comes straight out of the standard R installation: you access it by clicking the R icon installed on your computer. The second option is R Studio, which is also free and open source. Everything covered in the e-TAs can be done using either of these GUIs.

First steps in R

Having installed R, the next step is learning the syntax of the language, that is, its rules. After you open the R GUI or R Studio you will see the R console, which displays the results of your analysis and any messages associated with the code entered at the command line (after the prompt ">").

For example, we can use R as a calculator. You can type arithmetical expressions at the prompt (“>”):

    2 + 2
[1] 4

or

    log(1)
[1] 0

The [1] indicates that this is the first result from the command, and in this case the only one. You can also type something with multiple values, for example a sequence of integers from 10 to 40:

    10:40
 [1] 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32
[24] 33 34 35 36 37 38 39 40

The first line starts with the first return value, so it is labeled [1]; the second line starts with the 24th, so it is labeled [24].
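If you need a sequence with a step other than 1, the seq() function in base R generalizes the colon operator:

    seq(10, 40, by = 5)
[1] 10 15 20 25 30 35 40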


To quit your session just type

    q()

Scripting your work

Rather than saving the work space, it is highly recommended that you keep a record of the commands you entered, so that you can reproduce your work at a later date. The easiest way to do this is to enter commands in R's script editor, available from the File menu. Commands are executed by highlighting them and hitting Ctrl-R. At the end of a session, save the final script for a permanent record of your work. You can also use any text editor for this. In R Studio the script editor opens next to the console and the mechanics are the same, except that commands are executed by highlighting them and hitting Ctrl-Enter.

A script is a text file that contains lines of R code that can be saved and used over and over again. This is the preferred method to save your work and guarantee reproducibility. To learn more about reproducible research, see Professor Koenker's Reproducibility in Econometrics Research webpage.

A useful tip to keep in mind is that everything written after a # sign is treated as a comment and ignored by R.

Assignment

R has a work space known as the global environment where you can store your objects. For example, suppose we would like to store the result of log(2) for future use. To do this, type:

    x <- log(2)

Now x holds the result of that operation. To see this, type

    x
[1] 0.6931

Now we can use x in other operations. For example:

    x+x
[1] 1.386
    x*x
[1] 0.4805
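To see which objects are currently stored in your work space, and to remove ones you no longer need, base R provides ls() and rm():

    ls()     # lists the objects in the work space, here including "x"
    rm(x)    # removes the object x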

Vectors

You can also enter vectors. The c() function creates a vector. For example:

    weight <- c(65,45,67,78,56)

This creates a vector containing the numbers 65, 45, 67, 78 and 56; we can see its contents by typing

    weight
[1] 65 45 67 78 56

You can also check the length of the vector

   length(weight)
[1] 5

It is possible to do some arithmetic computations, for example multiply all elements by 3

    weight*3
[1] 195 135 201 234 168

or calculate a simple formula like

    height <- c(1.7,1.8,1.76,1.65,1.74)

    bmi <- weight/height^2

    bmi
[1] 22.49 13.89 21.63 28.65 18.50

First we created a new vector containing heights, and then calculated the body mass index. Note that the division is done element-wise.
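Comparisons are vectorized too. For example, using the bmi vector just created, we can check element by element which individuals have a BMI above 25:

    bmi > 25
[1] FALSE FALSE FALSE  TRUE FALSE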

Matrices and Arrays

To arrange numbers in a matrix, we can use the matrix function

    x<-matrix(1:12,nrow=3, ncol=4)
    x
     [,1] [,2] [,3] [,4]
[1,]    1    4    7   10
[2,]    2    5    8   11
[3,]    3    6    9   12

or we can create a sequence of numbers and assign dimensions to it

    x <- 1:12
    x
 [1]  1  2  3  4  5  6  7  8  9 10 11 12
    dim(x) <- c(3,4)
    x
     [,1] [,2] [,3] [,4]
[1,]    1    4    7   10
[2,]    2    5    8   11
[3,]    3    6    9   12

We can assign names to the rows. For example, we assign the first three letters:

    rownames(x) <- LETTERS[1:3]
    x
  [,1] [,2] [,3] [,4]
A    1    4    7   10
B    2    5    8   11
C    3    6    9   12

Other useful operations are:

Operator or Function   Description
A * B                  Element-wise multiplication
A %*% B                Matrix multiplication
A %o% B                Outer product, AB'
t(A)                   Transpose
diag(x)                Creates a diagonal matrix with the elements of x on the principal diagonal
solve(A, b)            Returns the vector x that solves b = Ax (i.e., A^(-1)b)
solve(A)               Inverse of A, where A is a square matrix
cbind(A,B,...)         Combines matrices (vectors) horizontally; returns a matrix
rbind(A,B,...)         Combines matrices (vectors) vertically; returns a matrix
rowMeans(A)            Returns a vector of row means
rowSums(A)             Returns a vector of row sums
colMeans(A)            Returns a vector of column means
colSums(A)             Returns a vector of column sums

(taken from Quick-R)
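As a brief illustration of a few of these operations, consider a small 2 x 2 matrix:

    A <- matrix(c(2, 0, 0, 3), nrow = 2)
    t(A)             # transpose of A
    A %*% solve(A)   # A times its inverse gives the 2 x 2 identity matrix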

Indexing

Individual elements of an array can be referenced by the name of the array followed by the subscripts in square brackets, and separated by commas. For example:

    x<-matrix(1:12,nrow=3,ncol=4)
    x[,1]
[1] 1 2 3

refers to the first column of x.

    x[1,]
[1]  1  4  7 10

and refers to the first row. If we type

    x[,1:2]
     [,1] [,2]
[1,]    1    4
[2,]    2    5
[3,]    3    6

we get the first two columns of x. But if we type

    x[,c(2,4)]
     [,1] [,2]
[1,]    4   10
[2,]    5   11
[3,]    6   12

we obtain the second and fourth columns of x. We can also subset using another vector, for example:

    weight[height>1.7]
[1] 45 67 56

gets those elements of weight whose corresponding element in height is bigger than 1.7.
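If you want the positions rather than the values, the which() function returns the indices of the elements satisfying the condition:

    which(height > 1.7)
[1] 2 3 5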


Working in R

One way to learn R is to dive right in and work through a simple example.

Example - The U.S. Economy in the 1990s

Let’s start with an analysis of the performance of the U.S. economy during the 1990s. We have annual data on GDP growth, GDP per capita growth, private consumption growth, investment growth, manufacturing labor productivity growth, unemployment rate, and inflation rate. (The data is publicly available in the statistical appendixes of the World Economic Outlook, May 2001, IMF).

The first step is to tell R where your working directory is, that is, where all the files related to your project are located. You should always do this at the beginning of your R session. You do so with the setwd(path) function, where path is the path to the folder where you want to read and write files. For example:

    setwd("C:/Econ508/eTA/")

Note first that I'm using forward slashes. You could also use backslashes, but in that case you have to double them (\\). If you are using a Mac you should omit the "C:". This command tells R to write and read everything in the Econ508/eTA folder (which I assume you created beforehand).

The next step is to download the data. Let's explore two ways of doing so. The first is the "traditional" way: go to the web page containing the data and save it. The data is available here. The other way is to use an R function:

    download.file("http://www.econ.uiuc.edu/~econ508/data/US90.txt", "US90.txt")

The first argument of the download.file function is the url where the file is located, and the second argument is the name under which the downloaded file is saved. To learn more about this function you can type ?download.file in your console, which will take you to the function's help file.
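If you re-run your script often, you may want to download the file only when it is not already in your working directory. A small sketch, using the same file as above:

    if (!file.exists("US90.txt")) {
        download.file("http://www.econ.uiuc.edu/~econ508/data/US90.txt", "US90.txt")
    }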

Now, we need to load the .txt file to R. To do so we use the read.table function.

    US90<-read.table("US90.txt", sep="", header=TRUE)

This function reads the US90.txt file, names the data set "US90", and tells R that the variables are separated by blank space (sep="") and that the first row contains the variable names (header=TRUE). Obviously, memorizing all the arguments a specific function can take is impractical; by typing ?read.table or help(read.table) you can see all the options the function accepts.

Now you have an object of class data frame that contains your data. To check the class of an object you can use class(), i.e.

    class(US90) 
[1] "data.frame"

Data frames are like matrices that can contain different types of data, not only numbers as we are used to. Since a data frame is rectangular like a matrix, you can check its dimension by typing

    dim(US90)
[1] 11  8

Now you are ready to work with your data!!

Basic Operations

A first thing you can do is extract each variable from the data frame into a single vector, to make individual analysis simpler. To do so you extract them from the data frame and give them their respective names.

    year<-US90$year
    gdpgr<-US90$gdpgr
    consgr<-US90$consgr
    invgr<-US90$invgr
    unemp<-US90$unemp
    gdpcapgr<-US90$gdpcapgr
    inf<-US90$inf   
    producgr<-US90$producgr

Now we have created 8 objects, each a vector containing one variable. As an alternative you could attach() your data frame to the R search path, which makes objects within data frames easier to access. However, the attach function does not play nicely with variables in the local work space that have the same names, so it is advisable to avoid using it.
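A safer alternative to attach() is the with() function, which evaluates an expression using the variables inside a data frame without copying anything into the work space:

    with(US90, mean(gdpgr))    # same result as mean(US90$gdpgr)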

A useful way to explore your data is checking the main statistics of each variable.

    summary(US90)
      year          gdpgr          consgr         invgr      
 Min.   :1992   Min.   :1.50   Min.   :2.40   Min.   : 3.30  
 1st Qu.:1994   1st Qu.:2.70   1st Qu.:2.95   1st Qu.: 5.30  
 Median :1997   Median :3.60   Median :3.40   Median : 7.30  
 Mean   :1997   Mean   :3.46   Mean   :3.65   Mean   : 6.96  
 3rd Qu.:2000   3rd Qu.:4.30   3rd Qu.:4.25   3rd Qu.: 8.80  
 Max.   :2002   Max.   :5.00   Max.   :5.30   Max.   :10.70  
     unemp         gdpcapgr         inf          producgr   
 Min.   :4.00   Min.   :0.70   Min.   :1.50   Min.   :1.90  
 1st Qu.:4.45   1st Qu.:1.75   1st Qu.:2.25   1st Qu.:3.20  
 Median :5.00   Median :2.60   Median :2.60   Median :3.90  
 Mean   :5.33   Mean   :2.49   Mean   :2.59   Mean   :4.31  
 3rd Qu.:5.85   3rd Qu.:3.30   3rd Qu.:2.95   3rd Qu.:5.45  
 Max.   :7.50   Max.   :4.20   Max.   :3.40   Max.   :7.20  

This gives you the minimum, 1st quartile, median, mean, 3rd quartile, and maximum of each variable. If you wish to see the statistics for a single variable, just include its name in the command; the standard deviation is obtained with sd():

    summary(gdpgr)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   1.50    2.70    3.60    3.46    4.30    5.00 
    sd(gdpgr)
[1] 1.051

If you are interested only in a subset of your data, you can inspect it using filters. For example, begin by checking the dimension of the data matrix:

    dim(US90)
[1] 11  8

This means that your data matrix contains 11 rows (corresponding to the years 1992 to 2002) and 8 columns (corresponding to the variables). If you are only interested in a subset of the time periods (e.g., the years of the Clinton administration), you can select it as a new object:

    Clinton<-US90[2:9, ]

and then compute its main statistics:

    summary(Clinton)
      year          gdpgr          consgr         invgr      
 Min.   :1993   Min.   :2.70   Min.   :3.00   Min.   : 5.40  
 1st Qu.:1995   1st Qu.:3.38   1st Qu.:3.35   1st Qu.: 6.90  
 Median :1996   Median :4.10   Median :3.70   Median : 8.60  
 Mean   :1996   Mean   :3.88   Mean   :4.04   Mean   : 8.03  
 3rd Qu.:1998   3rd Qu.:4.40   3rd Qu.:4.85   3rd Qu.: 8.88  
 Max.   :2000   Max.   :5.00   Max.   :5.30   Max.   :10.70  
     unemp         gdpcapgr         inf          producgr   
 Min.   :4.00   Min.   :1.50   Min.   :1.50   Min.   :1.90  
 1st Qu.:4.42   1st Qu.:2.38   1st Qu.:2.27   1st Qu.:3.30  
 Median :5.20   Median :3.10   Median :2.70   Median :3.85  
 Mean   :5.21   Mean   :2.88   Mean   :2.59   Mean   :4.40  
 3rd Qu.:5.72   3rd Qu.:3.40   3rd Qu.:2.92   3rd Qu.:5.90  
 Max.   :6.90   Max.   :4.20   Max.   :3.40   Max.   :7.20  

If you are only interested in a subset of the variables (e.g., consumption and investment growth rates), you can select them by typing:

    VarSet1<-US90[ ,3:4]

and then compute its main statistics:

    summary(VarSet1)
     consgr         invgr      
 Min.   :2.40   Min.   : 3.30  
 1st Qu.:2.95   1st Qu.: 5.30  
 Median :3.40   Median : 7.30  
 Mean   :3.65   Mean   : 6.96  
 3rd Qu.:4.25   3rd Qu.: 8.80  
 Max.   :5.30   Max.   :10.70  

or in a much simpler way:

    summary(US90[,3:4])
     consgr         invgr      
 Min.   :2.40   Min.   : 3.30  
 1st Qu.:2.95   1st Qu.: 5.30  
 Median :3.40   Median : 7.30  
 Mean   :3.65   Mean   : 6.96  
 3rd Qu.:4.25   3rd Qu.: 8.80  
 Max.   :5.30   Max.   :10.70  

To create new variables, you can use traditional operators (+,-,*,/,^) and name new variables as follows:

  • add or subtract: lagyear<-year-1
  • multiply: newgdpgr<-gdpgr*100
  • divide: newunemp<-unemp/100
  • exponential: gdpcap2<-gdpcapgr^2
  • square root: sqrtcons<-sqrt(consgr)
  • natural logs: loginv<-log(invgr)
  • base 10 logs: log10inf<-log10(inf)
  • exponential: expprod<-exp(producgr)
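If you prefer to keep the new variables together with the original data, you can also add them as new columns of the data frame. For example:

    US90$loginv <- log(US90$invgr)    # adds the column loginv to US90
    dim(US90)
[1] 11  9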

Exploring Graphical Resources

Suppose now you want to check the relationship among variables. For example, suppose you would like to see how GDP growth is related to GDP per capita growth. This corresponds to a single graph that can be obtained as follows:

    plot(gdpgr, gdpcapgr, pch="*")

(Figure: scatter plot of gdpcapgr against gdpgr)

Another useful tool is checking multiple graphs in a single window. For example, suppose you would like to expand your selection and check the pairwise relationships of GDP, consumption, and investment growth. You can obtain that as follows:

    pairs(US90 [, 2:4], pch="*")

(Figure: pairwise scatter plots of gdpgr, consgr, and invgr)

Suppose you would like to see the performance of multiple variables (e.g., GDP, GDP per capita, consumption, and investment growth rates) over time. The simplest way is as follows:

    par(mfrow=c(2,2))
    plot(year, gdpgr,    pch="*")
    plot(year, consgr,   pch="*")
    plot(year, gdpcapgr, pch="*")
    plot(year, invgr,    pch="*")

(Figure: 2-by-2 grid of time plots of gdpgr, consgr, gdpcapgr, and invgr)

Here the function par(mfrow=c(2,2)) sets up a 2-by-2 grid of panels in which the individual graphs are placed, while plot produces the individual graph for each selected variable.

You can easily expand the list of variables to obtain a graphical assessment of the performance of each of them over time. You can also use the graphs to assess cross-correlations (in a pairwise sense) among variables.

Linear Regression

Before running a regression, it is recommended you check the cross-correlations among covariates. You can do that graphically (see above) or using the following simple command:

    cor(US90)
             year    gdpgr  consgr    invgr   unemp gdpcapgr      inf
year      1.00000 -0.02869  0.1311 -0.03004 -0.8708   0.1064 -0.33598
gdpgr    -0.02869  1.00000  0.8394  0.90975 -0.3035   0.9890 -0.10121
consgr    0.13112  0.83937  1.0000  0.82695 -0.4761   0.8347 -0.11984
invgr    -0.03004  0.90975  0.8270  1.00000 -0.3684   0.8841 -0.30902
unemp    -0.87084 -0.30349 -0.4761 -0.36842  1.0000  -0.4143  0.35902
gdpcapgr  0.10642  0.98903  0.8347  0.88410 -0.4143   1.0000 -0.12296
inf      -0.33598 -0.10121 -0.1198 -0.30902  0.3590  -0.1230  1.00000
producgr  0.33167  0.57080  0.7050  0.52383 -0.5336   0.6003 -0.08322
         producgr
year      0.33167
gdpgr     0.57080
consgr    0.70499
invgr     0.52383
unemp    -0.53363
gdpcapgr  0.60028
inf      -0.08322
producgr  1.00000

From the matrix above you can see, for example, that GDP and GDP per capita growth rates are closely related, but each of them has a different degree of connection with the unemployment rate (in fact, GDP per capita growth presents a higher correlation with unemployment than total GDP growth). Inflation and unemployment present a reasonable degree of positive correlation (about 36%).

Now you can start with simple linear regressions. For example, let's check the regression of GDP growth on investment growth. You just type:

    model1<-lm(gdpgr~invgr)
    summary(model1)

Call:
lm(formula = gdpgr ~ invgr)

Residuals:
   Min     1Q Median     3Q    Max 
-0.550 -0.351 -0.115  0.311  0.804 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)    
(Intercept)   0.7033     0.4422    1.59   0.1462    
invgr         0.3969     0.0604    6.57   0.0001 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.46 on 9 degrees of freedom
Multiple R-squared:  0.828, Adjusted R-squared:  0.808 
F-statistic: 43.2 on 1 and 9 DF,  p-value: 0.000102

Please note that you don't need to include the intercept, because R includes it automatically. In the output above you have the main regression diagnostics (F-test, adjusted R-squared, t-statistics, sample size, etc.). The same rule applies to multiple linear regressions. For example, suppose you want to find the main sources of GDP growth. The command is:
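Should you ever need to exclude the intercept, you can add -1 (or +0) to the formula; for example:

    model1b <- lm(gdpgr ~ invgr - 1)    # regression through the origin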

    model2<-lm(gdpgr~consgr+invgr+producgr+unemp+inf)
    summary(model2)

Call:
lm(formula = gdpgr ~ consgr + invgr + producgr + unemp + inf)

Residuals:
      1       2       3       4       5       6       7       8       9 
 0.0952 -0.3784  0.4079 -0.1680 -0.3338  0.4390 -0.2652 -0.1979  0.2856 
     10      11 
-0.4359  0.5515 

Coefficients:
            Estimate Std. Error t value Pr(>|t|)  
(Intercept)  -0.8866     1.4929   -0.59     0.58  
consgr        0.1822     0.3605    0.51     0.63  
invgr         0.3449     0.1338    2.58     0.05 *
producgr      0.0490     0.1547    0.32     0.76  
unemp         0.0552     0.1898    0.29     0.78  
inf           0.3020     0.3726    0.81     0.45  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.517 on 5 degrees of freedom
Multiple R-squared:  0.879, Adjusted R-squared:  0.758 
F-statistic: 7.27 on 5 and 5 DF,  p-value: 0.0242

In the example above, despite the high adjusted R-squared, most of the covariates are not significant at the 5% level (actually, only investment is significant in this context). There may be many problems with the regression above. During the Econ508 classes you will learn how to solve these problems and how to select the best specification for your model.
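A quick complement to the t-statistics above is the confint() function, which reports confidence intervals (95% by default) for the estimated coefficients:

    confint(model2, level = 0.95)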

You can also run log-linear regressions. To do so, you type:

    model3<-lm(log(gdpgr)~log(consgr)+log(invgr)+log(producgr)+log(unemp)+log(inf))
    summary(model3)

Call:
lm(formula = log(gdpgr) ~ log(consgr) + log(invgr) + log(producgr) + 
    log(unemp) + log(inf))

Residuals:
      1       2       3       4       5       6       7       8       9 
 0.0250 -0.0925  0.0932 -0.0542 -0.1022  0.0810 -0.0768 -0.0360  0.1022 
     10      11 
-0.1845  0.2448 

Coefficients:
              Estimate Std. Error t value Pr(>|t|)  
(Intercept)     -0.991      0.788   -1.26    0.264  
log(consgr)      0.115      0.467    0.25    0.815  
log(invgr)       0.780      0.308    2.53    0.052 .
log(producgr)    0.095      0.194    0.49    0.644  
log(unemp)       0.201      0.372    0.54    0.612  
log(inf)         0.118      0.279    0.43    0.688  
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Residual standard error: 0.173 on 5 degrees of freedom
Multiple R-squared:  0.878, Adjusted R-squared:  0.756 
F-statistic: 7.19 on 5 and 5 DF,  p-value: 0.0247

Finally, you can plot the vector of residuals as follows:

    resid3<-resid(model3)
    plot(year,resid3)

plot of chunk unnamed-chunk-44

You can also obtain the fitted values and different plots as follows:

    fit3<-fitted(model3)  #   This will generate a vector of fitted values for the model 3.
    par(mfrow=c(2,2))
    plot(model3)      #     This will generate default plots of residuals vs. fitted values, Normal  Q-Q, scale-location, and Cook's distance.

plot of chunk unnamed-chunk-45

Note here that we have added inline comments using the # symbol.

Linear Hypothesis Testing

Suppose you want to check whether the variables investment, consumption, and productivity growth matter for GDP growth. In this context, you want to test whether those variables matter simultaneously. The best way to check that in R is as follows. First, run an unrestricted model with all the variables:

    u<-lm(log(gdpgr)~log(invgr)+log(consgr)+log(producgr)+log(unemp)+log(inf))

Then run a restricted model, discarding the variables under test:

    r<-lm(log(gdpgr)~log(unemp)+log(inf))

Now you will run an F-test comparing the unrestricted to the restricted model. To do that, you will need to write the F-test function in R, as follows. (The theory comes from Johnston and DiNardo (1997), p. 95, while the R code is a version of Greg Kordas' S code adjusted for this specific problem.)

    F.test<-function(u,r){
        #u is the unrestricted model
        k<-length(coef(u))
        n<-length(resid(u))
        eeu<-sum(resid(u)^2)
        #r is the restricted model
        kr<-length(coef(r))
        eer<-sum(resid(r)^2)
        #q is the number of restrictions
        q<-k-kr
        #F-statistic
        Fstat<-((eer-eeu)/q)/(eeu/(n-k))
        #P-value
        Fprob<-1-pf(Fstat, q, n-k)
        list(Fstat=Fstat, Fprob=Fprob)
}
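As a check, base R's anova() function performs the same F-test when given two nested models, so you can verify the hand-coded function against it:

    anova(r, u)    # same F statistic and p-value as F.test(u, r)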

After that, you can run the test and obtain the F-statistic and p-value:

    F.test(u,r)$Fstat
[1] 11.4
    F.test(u,r)$Fprob
[1] 0.01128

And the conclusion is that you can reject the null hypothesis of joint non-significance at the 1.13% level.

Creating your own functions in R

As we mentioned previously, one of the great strengths of R is the user's ability to create new functions. In fact, many of the functions in R are themselves built from other functions. The basic structure of a function is given below.

    myfunction <- function(arg1, arg2, ...){
        statements
        return(object)
    }

You already created a function for the F-test in the example above; let's create another one, for example a function that computes the coefficients of a linear regression:

    lr <- function(y,X){
        X<-data.matrix(X)
        y<-data.matrix(y)
        Intercept<-rep(1,dim(X)[1])
        X<-cbind(Intercept,X)
        b<-solve(t(X)%*%X)%*%t(X)%*%y
        b
    }

The lr() function returns the coefficients of an OLS regression by calculating:

\[\hat{\beta}=(X'X)^{-1}X'y\]

You can check that the function returns the same values as the lm() function:

    lr(US90[,2],US90[,c(3,4,5)])
              [,1]
Intercept -0.31476
consgr     0.33600
invgr      0.29400
unemp      0.09553
    summary(lm(gdpgr~consgr+invgr+unemp))$coef
            Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.31476     1.2149 -0.2591  0.80302
consgr       0.33600     0.2707  1.2411  0.25455
invgr        0.29400     0.1100  2.6726  0.03188
unemp        0.09553     0.1506  0.6344  0.54597
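As a sketch of how lr() could be extended, the standard errors in the lm() output above can also be reproduced by hand; the function name lr.se and its internals are just one possible implementation:

    lr.se <- function(y, X){
        X <- cbind(Intercept = 1, data.matrix(X))
        y <- data.matrix(y)
        b <- solve(t(X) %*% X) %*% t(X) %*% y
        e <- y - X %*% b                         # residuals
        s2 <- sum(e^2) / (nrow(X) - ncol(X))     # residual variance
        sqrt(diag(s2 * solve(t(X) %*% X)))       # standard errors
    }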

Another nice thing about R is that you can combine your own functions with loops. For example,

    download<-function(x,folder){
        URL<- paste("http://www.econ.uiuc.edu/~econ508",folder,x,sep='/')
       destfile <- paste(folder, x, sep='/')
       download.file(URL, destfile)    
    }

this creates a function that downloads a file from the Econ508 webpage and saves it in a desired folder.

    names<-list("US90.txt", "giffen.dat", "giffen.csv", "gasq.data", "gasm.data", "AUTO2.DTA", "AUTO2.txt", "CPS.txt", "eggs.csv") 

Next I created a list with the names of the files I want to download, and then ran a loop with lapply() that downloads and saves all these files on my computer in the folder "data":

    lapply(names, download, folder="data")
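The same result can be obtained with an explicit for loop, which may be more familiar:

    for (f in names) {
        download(f, folder = "data")
    }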

Final words

In this first e-TA I tried to convince you of why you should use R, as well as to introduce you to some basic operations. The next e-TA is closely related to the first problem set, and hopefully it will help you get the most out of Econ 508 and R.


  1. Please send comments to bottan2@illinois.edu or srmntbr2@illinois.edu