# The ivlewbel Package: A New Way to Tackle Endogenous Regressor Models

In April 2012, I wrote a blog post demonstrating an approach, proposed in Lewbel (2012), that identifies the coefficients of endogenous regressors in a linear triangular system. I am now happy to announce the release of the ivlewbel package, which contains a function through which Lewbel's method can be applied in R. The package is available to download from CRAN.

The example from the previous blog post is replicated below. Additionally, it would be very helpful if people could report bugs and suggest features they would like added to the package. My contact details are in the about section of the blog.

```r
library(ivlewbel)

beta1 <- beta2 <- NULL
for (k in 1:500) {
  # generate data (including intercept)
  x1 <- rnorm(1000, 0, 1)
  x2 <- rnorm(1000, 0, 1)
  u  <- rnorm(1000, 0, 1)
  s1 <- rnorm(1000, 0, 1)
  s2 <- rnorm(1000, 0, 1)
  ov <- rnorm(1000, 0, 1)
  e1 <- u + exp(x1)*s1 + exp(x2)*s1
  e2 <- u + exp(-x1)*s2 + exp(-x2)*s2
  y1 <- 1 + x1 + x2 + ov + e2
  y2 <- 1 + x1 + x2 + y1 + 2*ov + e1
  x3 <- rep(1, 1000)
  dat <- data.frame(y1, y2, x3, x1, x2)

  # record the OLS estimate of the endogenous coefficient
  beta1 <- c(beta1, coef(lm(y2 ~ x1 + x2 + y1))[4])
  # record the Lewbel IV-GMM estimate
  beta2 <- c(beta2, lewbel(formula = y2 ~ y1 | x1 + x2 | x1 + x2,
                           data = dat)$coef.est[1,1])
}

library(sm)
d <- data.frame(rbind(cbind(beta1, "OLS"), cbind(beta2, "IV-GMM")))
d$beta1 <- as.numeric(as.character(d$beta1))
sm.density.compare(d$beta1, d$V2, xlab = "Endogenous Coefficient")
title("Lewbel and OLS Estimates")
legend("topright", levels(d$V2), lty = c(1,2,3), col = c(2,3,4), bty = "n")
abline(v = 1)
```

# IV Estimates via GMM with Clustering in R

In econometrics, the generalized method of moments (GMM) is one estimation methodology that can be used to calculate instrumental variable (IV) estimates. For a linear IV model, performing this calculation in R is trivial: one simply uses the gmm() function from the excellent gmm package much like lm() or ivreg(). The gmm() function estimates the regression and returns the model coefficients and their standard errors. An interesting feature of this function, and of GMM estimators in general, is that they come with a built-in test of over-identifying restrictions, often dubbed Hansen's J-test. Therefore, in cases where the researcher is lucky enough to have more instruments than endogenous regressors, this test should be examined after estimation.
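To illustrate, here is a minimal sketch of a linear IV estimation with gmm() on simulated data (the variable names and data-generating process are my own, chosen only for illustration):

```r
library(gmm)

set.seed(1)
n  <- 200
z1 <- rnorm(n); z2 <- rnorm(n)
u  <- rnorm(n)
x1 <- z1 + z2 + u + rnorm(n)   # endogenous: correlated with the error u
y  <- 1 + x1 + u

# formula interface, used much like lm() or ivreg():
# model formula first, instrument formula second
iv <- gmm(y ~ x1, ~ z1 + z2)
summary(iv)    # coefficients and GMM standard errors
specTest(iv)   # Hansen's J-test of over-identifying restrictions
```

With two instruments and one endogenous regressor, the model is over-identified by one restriction, so the J-test has one degree of freedom here.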

While the gmm() function is very flexible, it does not (yet) allow the user to estimate a GMM model with standard errors and an over-identification test corrected for clustering. Thankfully, gmm() is flexible enough to allow for a simple hack that works around this small shortcoming. For this, I have created a function called gmmcl(), shown below, which handles a basic linear IV model. It uses gmm() to estimate both steps of a two-step feasible GMM procedure; the key to allowing for clustering is to adjust the weighting matrix between the two steps. Interested readers can find more technical details regarding this approach here. After defining the function, I show a simple application.

```r
gmmcl <- function(formula1, formula2, data, cluster) {
  library(plyr); library(gmm)
  # add a row id, then drop rows with missing values in either
  # the model formula or the instrument formula
  data$id1 <- 1:dim(data)[1]
  formula3 <- paste(as.character(formula1)[3], "id1", sep = " + ")
  formula4 <- paste(as.character(formula1)[2], formula3, sep = " ~ ")
  formula4 <- as.formula(formula4)
  formula5 <- paste(as.character(formula2)[2], "id1", sep = " + ")
  formula6 <- paste("~", formula5)
  formula6 <- as.formula(formula6)
  frame1 <- model.frame(formula4, data)
  frame2 <- model.frame(formula6, data)
  dat1 <- join(data, frame1, type = "inner", match = "first")
  dat2 <- join(dat1, frame2, type = "inner", match = "first")

  # matrix of instruments
  Z1 <- model.matrix(formula2, dat2)

  # step 1: GMM with the identity weighting matrix
  gmm1 <- gmm(formula1, formula2, data = dat2,
              vcov = "TrueFixed", weightsMatrix = diag(dim(Z1)[2]))

  # cluster-robust weighting matrix from the step-1 residuals:
  # sum the moment contributions within each cluster
  cluster <- factor(dat2[, cluster])
  u <- residuals(gmm1)
  estfun <- sweep(Z1, MARGIN = 1, u, '*')
  u <- apply(estfun, 2, function(x) tapply(x, cluster, sum))
  S <- 1/(length(residuals(gmm1))) * crossprod(u)

  # step 2: GMM with the clustered weighting matrix
  gmm2 <- gmm(formula1, formula2, data = dat2,
              vcov = "TrueFixed", weightsMatrix = solve(S))
  return(gmm2)
}

# generate a data.frame in which each observation appears twice,
# sharing a cluster id
n  <- 100
z1 <- rnorm(n)
z2 <- rnorm(n)
x1 <- z1 + z2 + rnorm(n)
y1 <- x1 + rnorm(n)
id <- 1:n

data <- data.frame(z1 = c(z1, z1), z2 = c(z2, z2), x1 = c(x1, x1),
                   y1 = c(y1, y1), id = c(id, id))

summary(gmmcl(y1 ~ x1, ~ z1 + z2, data = data, cluster = "id"))
```
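Because each observation in the toy data above appears twice within the same cluster, the naive (unclustered) fit treats the sample as twice as informative as it really is, and its standard errors are too small. A quick sketch of the comparison fit, re-creating the stacked data so the snippet runs on its own (variable names are my own, for illustration):

```r
library(gmm)

set.seed(123)
n  <- 100
z1 <- rnorm(n); z2 <- rnorm(n)
x1 <- z1 + z2 + rnorm(n)
y1 <- x1 + rnorm(n)

# stack two copies of each observation, as in the example above
dat <- data.frame(z1 = c(z1, z1), z2 = c(z2, z2),
                  x1 = c(x1, x1), y1 = c(y1, y1))

# naive two-step GMM ignores the duplication, so its reported
# standard errors understate the true sampling uncertainty
summary(gmm(y1 ~ x1, ~ z1 + z2, data = dat))
```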