# IV Estimates via GMM with Clustering in R

In econometrics, the generalized method of moments (GMM) is one estimation methodology that can be used to calculate instrumental variable (IV) estimates. Performing this calculation in R, for a linear IV model, is trivial. One simply uses the gmm() function from the excellent gmm package in the same way one would use lm() or ivreg(). The gmm() function estimates the regression and returns the model coefficients and their standard errors. An interesting feature of this function, and of GMM estimators in general, is that they come with an inherent test of over-identification, often dubbed Hansen’s J-test. Therefore, in cases where the researcher is lucky enough to have more instruments than endogenous regressors, they should examine this over-identification test post-estimation.
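For instance, with one endogenous regressor and two instruments (an over-identified model), the workflow might look like the following sketch, where the data-generating process is purely illustrative; summary() reports the coefficients and specTest() the J-test:

library(gmm)
set.seed(1)
n = 500
u = rnorm(n)                  # unobserved confounder
z1 = rnorm(n) ; z2 = rnorm(n) # two excluded instruments
x = z1 + z2 + u + rnorm(n)    # endogenous regressor
y = 1 + 0.5*x + u + rnorm(n)
m1 = gmm(y ~ x, ~ z1 + z2)
summary(m1)   # coefficients and standard errors
specTest(m1)  # Hansen's J-test (one over-identifying restriction)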

While the gmm() function in R is very flexible, it does not (yet) allow the user to estimate a GMM model that produces standard errors and an over-identification test corrected for clustering. Thankfully, the gmm() function is flexible enough to allow for a simple hack that works around this small shortcoming. For this, I have created a function called gmmcl(), and you can find the code below. This is a function for a basic linear IV model. It uses the gmm() function to estimate both steps in a two-step feasible GMM procedure. The key to allowing for clustering is to adjust the weights matrix after the first step, so that the second step, and hence the standard errors and over-identification test, use a cluster-robust weights matrix. Interested readers can find more technical details regarding this approach here. After defining the function, I show a simple application in the code below.

gmmcl = function(formula1, formula2, data, cluster){
library(gmm)

# create data.frame: model and cluster variables, complete cases only
vars = unique(c(all.vars(formula1), all.vars(formula2), cluster))
dat = data[complete.cases(data[, vars]), vars]

# matrix of instruments
Z = model.matrix(formula2, dat)

# step 1: GMM with an identity weights matrix
step1 = gmm(formula1, formula2, data = dat,
vcov = "TrueFixed", weightsMatrix = diag(ncol(Z)))

# build the clustered weights matrix from the step-1 moment conditions
cl = factor(dat[, cluster])
mom = sweep(Z, 1, residuals(step1), "*")
mom = apply(mom, 2, function(x) tapply(x, cl, sum))
S = crossprod(mom) / nrow(Z)

# step 2: GMM with the inverse clustered matrix as the weights matrix,
# so the standard errors and J-test are corrected for clustering
gmm(formula1, formula2, data = dat,
vcov = "TrueFixed", weightsMatrix = solve(S))
}
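A simple application: simulate clustered data with one endogenous regressor and two instruments, then call gmmcl(). The data-generating process and variable names here are illustrative:

library(mvtnorm)
set.seed(123)
n = 1000
# correlated errors make x endogenous
ue = rmvnorm(n, rep(0, 2), matrix(c(1, 0.5, 0.5, 1), 2, 2))
z1 = rnorm(n) ; z2 = rnorm(n)
x = z1 + z2 + ue[,1]
y = 1 + 0.5*x + ue[,2]
dat = data.frame(y, x, z1, z2, id = rep(1:50, each = 20))
m2 = gmmcl(y ~ x, ~ z1 + z2, data = dat, cluster = "id")
summary(m2)  # cluster-corrected standard errors and J-test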
# Detecting Weak Instruments in R


Any instrumental variables (IV) estimator relies on two key assumptions in order to identify causal effects:

1. That the excluded instrument or instruments only affect the dependent variable through their effect on the endogenous explanatory variable or variables (the exclusion restriction),
2. That the correlation between the excluded instruments and the endogenous explanatory variables is strong enough to permit identification.

The first assumption is difficult or impossible to test, and sheer belief plays a big part in what can be perceived to be a good IV. An interesting paper was published last year in the Review of Economics and Statistics by Conley, Hansen, and Rossi (2012), wherein the authors provide a Bayesian framework that permits researchers to explore the consequences of relaxing exclusion restrictions in a linear IV estimator. It will be interesting to watch research on this topic expand in the coming years.

Fortunately, it is possible to quantitatively measure the strength of the relationship between the IVs and the endogenous variables. The so-called weak IV problem was underlined in a paper by Bound, Jaeger, and Baker (1995). When the relationship between the IVs and the endogenous variable is not sufficiently strong, IV estimators do not correctly identify causal effects.

The Bound, Jaeger, and Baker paper represented a very important contribution to the econometrics literature. As a result of this paper, empirical studies that use IV almost always report some measure of instrument strength. A secondary result of this paper was the establishment of a literature that evaluates different methods of testing for weak IVs. Staiger and Stock (1997) furthered this research agenda, formalizing the relevant asymptotic theory and recommending the now ubiquitous “rule-of-thumb” measure: a first-stage partial F-statistic of less than 10 indicates the presence of weak instruments.

In the code below, I have illustrated how one can perform these partial F-tests in R. The importance of clustered standard errors has been highlighted on this blog before, so I also show how the partial F-test can be performed in the presence of clustering (and heteroskedasticity too). To obtain the clustered variance-covariance matrix, I have adapted some code kindly provided by Ian Gow. For completeness, I have displayed the clustering function at the end of the blog post.

# load packages
library(AER) ; library(foreign) ; library(mvtnorm)
# clear workspace and set seed
rm(list=ls())
set.seed(100)

# number of observations
n = 1000
# simple triangular model:
# y2 = b1 + b2x1 + b3y1 + e
# y1 = a1 + a2x1 + a3z1 + u
# error terms (u and e) correlate
Sigma = matrix(c(1,0.5,0.5,1),2,2)
ue = rmvnorm(n, rep(0,2), Sigma)
# iv variable
z1 = rnorm(n)
x1 = rnorm(n)
y1 = 0.3 + 0.8*x1 - 0.5*z1 + ue[,1]
y2 = -0.9 + 0.2*x1 + 0.75*y1 +ue[,2]
# create data
dat = data.frame(z1, x1, y1, y2)

# biased OLS
lm(y2 ~ x1 + y1, data=dat)
# IV (2SLS)
ivreg(y2 ~ x1 + y1 | x1 + z1, data=dat)

# do regressions for partial F-tests
# first-stage:
fs = lm(y1 ~ x1 + z1, data = dat)
# null first-stage (i.e. exclude IVs):
fn = lm(y1 ~ x1, data = dat)

# simple F-test
waldtest(fs, fn)$F[2]
# F-test robust to heteroskedasticity
waldtest(fs, fn, vcov = vcovHC(fs, type="HC0"))$F[2]

####################################################
# now let's get some F-tests robust to clustering

# generate cluster variable
dat$cluster = 1:n
# repeat dataset 10 times to artificially reduce standard errors
dat = dat[rep(seq_len(nrow(dat)), 10), ]
# re-run first-stage regressions
fs = lm(y1 ~ x1 + z1, data = dat)
fn = lm(y1 ~ x1, data = dat)
# simple F-test
waldtest(fs, fn)$F[2]
# ~ 10 times higher!
# F-test robust to clustering
waldtest(fs, fn, vcov = clusterVCV(dat, fs, cluster1="cluster"))$F[2]
# ~ 10 times lower than above (good)

Further “rule-of-thumb” measures are provided in a paper by Stock and Yogo (2005), and it should be noted that a whole battery of weak-IV tests exists (for example, see the Kleibergen-Paap rank Wald F-statistic and the Anderson-Rubin Wald test); one should perform these tests if the presence of weak instruments represents a serious concern.

# R function adapted from Ian Gow's webpage:
# http://www.people.hbs.edu/igow/GOT/Code/cluster2.R.html
clusterVCV <- function(data, fm, cluster1, cluster2=NULL) {
require(sandwich)
require(lmtest)

# Calculation shared by covariance estimates
est.fun <- estfun(fm)
inc.obs <- complete.cases(data[,names(fm$model)])

# Shared data for degrees-of-freedom corrections
N <- dim(fm$model)[1]
NROW <- NROW(est.fun)
K <- fm$rank

# Calculate the sandwich covariance estimate
cov <- function(cluster) {
cluster <- factor(cluster)

# Calculate the "meat" of the sandwich estimators
u <- apply(est.fun, 2, function(x) tapply(x, cluster, sum))
meat <- crossprod(u)/N

# Calculations for degrees-of-freedom corrections, followed
# by calculation of the variance-covariance estimate.
# NOTE: NROW/N is a kluge to address the fact that sandwich uses the
# wrong number of rows (includes rows omitted from the regression).
M <- length(levels(cluster))
dfc <- M/(M-1) * (N-1)/(N-K)
dfc * NROW/N * sandwich(fm, meat=meat)
}

# Calculate the covariance matrix estimate for the first cluster.
cluster1 <- data[inc.obs,cluster1]
cov1  <- cov(cluster1)

if(is.null(cluster2)) {
# If only one cluster supplied, return single cluster
# results
return(cov1)
} else {
# Otherwise do the calculations for the second cluster
# and the "intersection" cluster.
cluster2 <- data[inc.obs,cluster2]
cluster12 <- paste(cluster1,cluster2, sep="")

# Calculate the covariance matrices for cluster2, the "intersection"
# cluster, then put all the pieces together.
cov2   <- cov(cluster2)
cov12  <- cov(cluster12)
covMCL <- (cov1 + cov2 - cov12)

# Return the output of coeftest using two-way cluster-robust
# standard errors.
return(covMCL)
}
}



# Endogenous Spatial Lags for the Linear Regression Model

Over the past number of years, I have noted that spatial econometric methods have been gaining popularity. This is a welcome trend in my opinion, as the spatial structure of data is something that should be explicitly included in the empirical modelling procedure. Omitting spatial effects assumes that the location co-ordinates for observations are unrelated to the observable characteristics that the researcher is trying to model. Not a good assumption, particularly in empirical macroeconomics where the unit of observation is typically countries or regions.

Starting out with the prototypical linear regression model: $y = X \beta + \epsilon$, we can modify this equation in a number of ways to account for the spatial structure of the data. In this blog post, I will concentrate on the spatial lag model. I intend to examine spatial error models in a future blog post.

The spatial lag model is of the form: $y= \rho W y + X \beta + \epsilon$, where the term $\rho W y$ measures the potential spill-over effect that occurs in the outcome variable when this outcome is influenced by other units’ outcomes, with the location of, or distance to, other observations being a factor in this spill-over. In other words, the neighbours of each observation have greater (or in some cases less) influence on what happens to that observation, independent of the other explanatory variables $(X)$. The $W$ matrix is a matrix of spatial weights, and the $\rho$ parameter measures the degree of spatial correlation. The value of $\rho$ is bounded between -1 and 1. When $\rho$ is zero, the spatial lag model collapses to the prototypical linear regression model.

The spatial weights matrix should be specified by the researcher. For example, take a dataset that consists of 3 observations, spatially located on a 1-dimensional Euclidean space, wherein the first observation is a neighbour of the second and the second is a neighbour of the third. The spatial weights matrix for this dataset should be a $3 \times 3$ matrix whose diagonal consists of 3 zeros (you are not a neighbour with yourself). Typically, this matrix will also be symmetric. It is also at the user’s discretion to choose the weights in $W$. Common schemes include nearest k neighbours (where k is again at the user’s discretion), inverse distance, and other schemes based on spatial contiguities. Row-standardization is usually performed, such that all the row elements in $W$ sum to one. In our simple example, the first row of a contiguity-based scheme would be: [0, 1, 0]. The second: [0.5, 0, 0.5]. And the third: [0, 1, 0], as the code below illustrates.
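In R, the weights matrix for this three-observation example can be constructed and row-standardized in a few lines (a minimal sketch):

W = matrix(c(0, 1, 0,
             1, 0, 1,
             0, 1, 0), nrow = 3, byrow = TRUE) # contiguity: 1-2 and 2-3
W = W / rowSums(W) # row-standardize so each row sums to one
W
# rows are now [0, 1, 0], [0.5, 0, 0.5], [0, 1, 0]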

While the spatial lag model represents a modified version of the basic linear regression model, estimation via OLS is problematic because the spatially lagged variable $(Wy)$ is endogenous. The endogeneity results from what Charles Manski calls the ‘reflection problem’: your neighbours influence you, but you also influence your neighbours. This feedback effect introduces simultaneity, which biases the OLS estimate of the spatial lag term. A further problem presents itself when the independent variables $(X)$ are themselves spatially correlated. In this case, completely omitting the spatial lag from the model specification will bias the $\beta$ coefficient values due to omitted variable bias.

Fortunately, remedying these biases is relatively simple, as a number of estimators exist that will yield an unbiased estimate of the spatial lag, and consequently of the coefficients for the other explanatory variables—assuming, of course, that these explanatory variables are themselves exogenous. Here, I will consider two: the maximum likelihood estimator (denoted ML) as described in Ord (1975), and a generalized two-stage least squares regression model (2SLS) wherein spatial lags ($WX$), and spatial lags of spatial lags (i.e. $W^{2}X$), of the explanatory variables are used as instruments for $Wy$. Alongside these two models, I also estimate the misspecified OLS both without (OLS1) and with (OLS2) the spatially lagged dependent variable. The sketch below illustrates the 2SLS idea.
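The following self-contained sketch shows the 2SLS approach by hand, using a simple ring-contiguity weights matrix (an illustrative assumption, not the lattice used below) and the parameter values from this post:

library(AER)
set.seed(42)
n = 100
# ring contiguity: each observation neighbours the ones either side of it
W = matrix(0, n, n)
for(i in 1:n){
W[i, ifelse(i == 1, n, i - 1)] = 1
W[i, ifelse(i == n, 1, i + 1)] = 1
}
W = W / rowSums(W) # row-standardize
x2 = rnorm(n) ; x3 = rnorm(n)
rho = 0.4
# reduced form: y = (I - rho W)^{-1} (X beta + e)
y = solve(diag(n) - rho*W, 0.5 - 0.5*x2 + 1.75*x3 + rnorm(n))
Wy = as.numeric(W %*% y) # endogenous spatial lag
Wx2 = as.numeric(W %*% x2) ; Wx3 = as.numeric(W %*% x3)
W2x2 = as.numeric(W %*% Wx2) ; W2x3 = as.numeric(W %*% Wx3)
# instrument Wy with the spatial lags of the exogenous regressors
summary(ivreg(y ~ Wy + x2 + x3 | x2 + x3 + Wx2 + Wx3 + W2x2 + W2x3))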

To examine the properties of these four estimators, I run a Monte Carlo experiment. First, let us assume that we have 225 observations spread evenly over a $15 \times 15$ lattice grid. Second, we assume that neighbours are defined by Rook contiguity, so two observations are neighbours if they sit directly above, below, or beside one another on the grid. Once we create the spatial weights matrix, we row-standardize it.

Taking our spatial weights matrix as defined, we want to simulate the following linear model: $y = \rho Wy + \beta_{1} + \beta_{2}x_{2} + \beta_{3}x_{3} + \epsilon$, where we set $\rho=0.4$, $\beta_{1}=0.5$, $\beta_{2}=-0.5$, and $\beta_{3}=1.75$. The explanatory variables are themselves spatially autocorrelated, so our simulation procedure first draws a random normal variable $u_{j} \sim N(0, 1)$ for each of $x_{2}$ and $x_{3}$, and then, assuming an autocorrelation parameter of $\rho_{x}=0.25$, generates both variables as $x_{j} = (I-\rho_{x}W)^{-1} u_{j}$ for $j \in \{ 2,3 \}$. In the next step we simulate the error term $\epsilon$. We introduce endogeneity into the spatial lag by assuming that the error term $\epsilon$ is a function of a random normal $v \sim N(0, 1)$, so $\epsilon = \alpha v + N(0, 1)$ with $\alpha=0.2$, and that the spatial lag term includes $v$. We modify the regression model to incorporate this: $y = \rho (Wy + v) + \beta_{1} + \beta_{2}x_{2} + \beta_{3}x_{3} + \epsilon$. From this we can calculate the reduced form: $y = (I - \rho W)^{-1} (\rho v + \beta_{1} + \beta_{2}x_{2} + \beta_{3}x_{3} + \epsilon)$, and simulate values for our dependent variable $y$.

Performing 1,000 repetitions of the above simulation permits us to examine the distributions of the coefficient estimates produced by the four models outlined above. The distributions of these coefficients are displayed in the graphic at the beginning of this post. The spatial autocorrelation parameter $\rho$ is in the bottom-right quadrant. As we can see, the OLS model that includes the spatial effect but does not account for simultaneity (OLS2) over-estimates the importance of spatial spill-overs. Both the ML and 2SLS estimators correctly identify the $\rho$ parameter. The remaining quadrants highlight what happens to the coefficients of the explanatory variables. Clearly, the OLS1 estimator provides the worst estimates of these coefficients. Thus, it appears preferable to use OLS2, with its biased autocorrelation parameter, rather than the simpler OLS1 estimator. However, the OLS2 estimator also yields biased parameter estimates for the $\beta$ coefficients. Furthermore, since researchers may want to know the marginal effects in spatial equilibrium (i.e. taking into account the spatial spill-over effects), the overestimated $\rho$ parameter creates an additional bias.

To perform these calculations I used the spdep package in R, with the graphic created via ggplot2. The R code I used is below.

library(spdep) ; library(ggplot2) ; library(reshape)

rm(list=ls())
n = 225
data = data.frame(n1=1:n)
# coords
data$lat = rep(1:sqrt(n), sqrt(n))
data$long = sort(rep(1:sqrt(n), sqrt(n)))
# create W matrix
wt1 = as.matrix(dist(cbind(data$long, data$lat), method = "euclidean", upper=TRUE))
wt1 = ifelse(wt1==1, 1, 0)
diag(wt1) = 0
# row standardize
rs = rowSums(wt1)
wt1 = apply(wt1, 2, function(x) x/rs)
lw1 = mat2listw(wt1, style="W")

rx = 0.25
rho = 0.4
b1 = 0.5
b2 = -0.5
b3 = 1.75
alp = 0.2

inv1 = invIrW(lw1, rho=rx, method="solve", feasible=NULL)
inv2 = invIrW(lw1, rho=rho, method="solve", feasible=NULL)

sims = 1000
beta1results = matrix(NA, ncol=4, nrow=sims)
beta2results = matrix(NA, ncol=4, nrow=sims)
beta3results = matrix(NA, ncol=4, nrow=sims)
rhoresults = matrix(NA, ncol=3, nrow=sims)

for(i in 1:sims){
u1 = rnorm(n)
x2 = inv1 %*% u1
u2 = rnorm(n)
x3 = inv1 %*% u2
v1 = rnorm(n)
e1 = alp*v1 + rnorm(n)
data1 = data.frame(cbind(x2, x3),lag.listw(lw1, cbind(x2, x3)))
names(data1) = c("x2","x3","wx2","wx3")

data1$y1 = inv2 %*% (b1 + b2*x2 + b3*x3 + rho*v1 + e1)
data1$wy1 = lag.listw(lw1, data1$y1)
data1$w2x2 = lag.listw(lw1, data1$wx2)
data1$w2x3 = lag.listw(lw1, data1$wx3)
data1$w3x2 = lag.listw(lw1, data1$w2x2)
data1$w3x3 = lag.listw(lw1, data1$w2x3)

m1 = coef(lm(y1 ~ x2 + x3, data1))
m2 = coef(lm(y1 ~ wy1 + x2 + x3, data1))
m3 = coef(lagsarlm(y1 ~ x2 + x3, data1, lw1))
m4 = coef(stsls(y1 ~ x2 + x3, data1, lw1))

beta1results[i,] = c(m1[1], m2[1], m3[2], m4[2])
beta2results[i,] = c(m1[2], m2[3], m3[3], m4[3])
beta3results[i,] = c(m1[3], m2[4], m3[4], m4[4])
rhoresults[i,] = c(m2[2], m3[1], m4[1])
}

apply(rhoresults, 2, mean) ; apply(rhoresults, 2, sd)
apply(beta1results, 2, mean) ; apply(beta1results, 2, sd)
apply(beta2results, 2, mean) ; apply(beta2results, 2, sd)
apply(beta3results, 2, mean) ; apply(beta3results, 2, sd)

colnames(rhoresults) = c("OLS2","ML","2SLS")
colnames(beta1results) = c("OLS1","OLS2","ML","2SLS")
colnames(beta2results) = c("OLS1","OLS2","ML","2SLS")
colnames(beta3results) = c("OLS1","OLS2","ML","2SLS")

rhoresults = melt(rhoresults)
rhoresults$coef = "rho"
rhoresults$true = 0.4
beta1results = melt(beta1results)
beta1results$coef = "beta1"
beta1results$true = 0.5
beta2results = melt(beta2results)
beta2results$coef = "beta2"
beta2results$true = -0.5
beta3results = melt(beta3results)
beta3results$coef = "beta3"
beta3results$true = 1.75

data = rbind(rhoresults, beta1results, beta2results, beta3results)
data$Estimator = data$X2

ggplot(data, aes(x=value, colour=Estimator, fill=Estimator)) +
geom_density(alpha=.3) +
facet_wrap(~ coef, scales="free") +
geom_vline(aes(xintercept=true)) +
scale_y_continuous("Density") +
scale_x_continuous("Effect Size") +
theme(legend.position='bottom', legend.direction='horizontal')

# How Much Should Bale Cost Real?

It looks increasingly likely that Gareth Bale will transfer from Tottenham to Real Madrid for a world record transfer fee. Negotiations are ongoing, with both parties keen to get the best possible deal on the transfer fee. Reports speculate that this fee will be anywhere in the very wide range of £80m to £120m. Given the topical nature of this transfer saga, I decided to explore the world record breaking transfer fee data, and see if these data can help predict what the Gareth Bale transfer fee should be.

According to this Wikipedia article, there have been 41 record breaking transfers, from Willie Groves going from West Brom to Aston Villa in 1893 for £100, to Cristiano Ronaldo’s £80m 2009 transfer to Real Madrid from Manchester United. When comparing any historical price data it is very important that we are comparing like with like. Clearly, a fee of £100 in 1893 is not the same as £100 in 2009. Therefore, the world record transfer fees need to be adjusted for inflation. To do this, I used the excellent measuringworth website, and converted all of the transfer fees into 2011 pounds sterling.

The plot above demonstrates a very strong linear relationship between logged real world record transfer fees and time. The R-squared indicates that the year of the transfer fee explains roughly 97% of the variation in price. So, if Real Madrid are to pay a world record transfer fee for Bale, how much does this model predict the fee will be? The above plot demonstrates what happens when the simple log-linear model is extrapolated to predict the world record transfer fee in 2013. The outcome here is 18.37, so around £96m in 2011 prices. We can update this value to 2013 prices. Assuming a modest inflation rate of 2%, we get £96m × exp(0.02 × 2) = £99.4m. No small potatoes.

rm(list=ls())
bale = read.csv("bale.csv")
# data from:
# http://en.wikipedia.org/wiki/World_football_transfer_record
# http://www.measuringworth.com/ukcompare/
ols1 = lm(log(real2011) ~ year, bale)
# price
exp(predict(ols1, data.frame(year=2013)))
# inflate, let's say 2% inflation
exp(predict(ols1, data.frame(year=2013)))*exp(0.02*2)
# nice ggplot
library(ggplot2)
bale$lnprice2011 = log(bale$real2011)
addon = data.frame(year=2013, nominal=0, real2011=0, name="Bale?",
lnprice2011=predict(ols1, data.frame(year=2013)))
ggplot(bale, aes(x=year, y=lnprice2011, label=name)) +
geom_text(hjust=0.4, vjust=0.4) +
stat_smooth(method = "lm", fullrange = TRUE, level = 0.975) +
theme_bw(base_size = 12, base_family = "") +
xlim(1885, 2020) + ylim(8, 20) +
xlab("Year") + ylab("ln(Price)") +
ggtitle("World Transfer Records, Real 2011 Prices (£)") +
geom_point(aes(col="red"), size=4, data=addon) +
geom_text(aes(col="red", fontface=3), hjust=-0.1, vjust=0, size=7, data=addon) +
theme(legend.position="none")

# The Frisch–Waugh–Lovell Theorem for Both OLS and 2SLS

The Frisch–Waugh–Lovell (FWL) theorem is of great practical importance for econometrics. FWL establishes that it is possible to re-specify a linear regression model in terms of orthogonal complements.
In other words, it permits econometricians to partial out right-hand-side, or control, variables. This is useful in a variety of settings. For example, there may be cases where a researcher would like to obtain the effect and cluster-robust standard error from a model that includes many regressors, and would therefore face a computationally infeasible variance-covariance matrix.

Here are a number of practical examples. The first takes a simple linear regression model with two regressors: x1 and x2. To partial out the coefficients on the constant term and x2, we first regress y1 on x2 and save the residuals. We then regress x1 on x2 and save the residuals. The final stage regresses the first set of residuals on the second. The following code illustrates how one can obtain an identical coefficient on x1 by applying the FWL theorem.

x1 = rnorm(100)
x2 = rnorm(100)
y1 = 1 + x1 - x2 + rnorm(100)
r1 = residuals(lm(y1 ~ x2))
r2 = residuals(lm(x1 ~ x2))
# ols
coef(lm(y1 ~ x1 + x2))
# fwl ols
coef(lm(r1 ~ -1 + r2))

FWL is also relevant for all linear instrumental variable (IV) estimators. Here, I will show how this extends to the 2SLS estimator, where slightly more work is required compared to the OLS example above. Here we have a matrix of instruments (Z), exogenous variables (X), and an endogenous variable, y1. Let us imagine we want the coefficient on this endogenous variable. In this case we can apply FWL as follows. Regress each IV in Z on X in separate regressions, saving the residuals. Then regress y1 on X, and y2 on X, saving the residuals for both. In the last stage, perform a two-stage least squares regression of the y2-on-X residuals on the y1-on-X residuals, using the residuals from the Z-on-X regressions as instruments. An example of this is shown in the below code.

library(sem)
ov = rnorm(100)
z1 = rnorm(100)
z2 = rnorm(100)
y1 = rnorm(100) + z1 + z2 + 1.5*ov
x1 = rnorm(100) + 0.5*z1 - z2
x2 = rnorm(100)
y2 = 1 + y1 - x1 + 0.3*x2 + ov + rnorm(100)
r1 = residuals(lm(z1 ~ x1 + x2))
r2 = residuals(lm(z2 ~ x1 + x2))
r3 = residuals(lm(y1 ~ x1 + x2))
r4 = residuals(lm(y2 ~ x1 + x2))
# biased coef on y1 as expected for ols
coef(lm(y2 ~ y1 + x1 + x2))
# 2sls
coef(tsls(y2 ~ y1 + x1 + x2, ~ z1 + z2 + x1 + x2))
# fwl 2sls
coef(tsls(r4 ~ -1 + r3, ~ -1 + r1 + r2))

The FWL theorem can also be extended to cases where there are multiple endogenous variables. I have demonstrated this case by extending the above example to model x1 as an endogenous variable.

# 2 endogenous variables
r5 = residuals(lm(z1 ~ x2))
r6 = residuals(lm(z2 ~ x2))
r7 = residuals(lm(y1 ~ x2))
r8 = residuals(lm(x1 ~ x2))
r9 = residuals(lm(y2 ~ x2))
# 2sls coefficients
p1 = fitted.values(lm(y1 ~ z1 + z2 + x2))
p2 = fitted.values(lm(x1 ~ z1 + z2 + x2))
lm(y2 ~ p1 + p2 + x2)
# 2sls fwl coefficients
p3 = fitted.values(lm(r7 ~ -1 + r5 + r6))
p4 = fitted.values(lm(r8 ~ -1 + r5 + r6))
lm(r9 ~ p3 + p4)

# Kalkalash! Pinpointing the Moments “The Simpsons” became less Cromulent

Whenever somebody mentions “The Simpsons” it always stirs up feelings of nostalgia in me. The characters, uproarious gags, zingy one-liners, and edgy animation all contributed towards making, arguably, the greatest TV show ever. However, it’s easy to forget that as a TV show “The Simpsons” is still ongoing—in its twenty-fourth season no less. For me, and most others, the latter episodes bear little resemblance to older ones. The current incarnation of the show is stale, and has been for a long time. I haven’t watched a new episode in over ten years, and don’t intend to any time soon. When did this decline begin?
Was it part of a slow secular trend, or was there a sudden drop in quality from which there was no recovery? To answer these questions I use the Global Episode Opinion Survey (GEOS) episode ratings data, which are published online. A simple web scrape of the “all episodes” page provides me with 423 episode ratings, spanning from the first episode of season 1 to the third episode of season 20. After S20E03, the ratings become too sparse, which is probably a function of how bad the show, in its current condition, is.

To detect changepoints in show ratings, I have used the R package changepoint. An informative introduction to both the package and changepoint analyses can be found in its accompanying vignette. The figure above provides a summary of my results. Five breakpoints were detected, the first occurring in the first episode of the ninth season: The City of New York Vs. Homer Simpson. Most will remember this; Homer goes to New York to collect his clamped car and ends up going berserk. Good episode, although this essentially marked the beginning of the end.

According to the changepoint results, the decline occurred in three stages. The first lasted from the New York episode up until episode 11 in season 10. The shows in this stage have an average rating of about 7, and the episode where the breakpoint is detected is: Wild Barts Can’t Be Broken. The next stage roughly marks my transition, as it is about this point that I stopped watching. This stage lasts as far as S15E09, whereupon the show suffers the further ignominy of another ratings downgrade. Things couldn’t possibly get any worse, and they don’t, as the show earns a minor reprieve after the twentieth episode of season 18. So now you know. R code for the analysis can be found below.

# packages
library(Hmisc) ; library(changepoint)
# clear ws
rm(list=ls())
# webscrape data
page1 = "http://www.geos.tv/index.php/list?sid=159&collection=all"
home1 = readLines(con<-url(page1)); close(con)
# pick out lines with ratings
means = '<td width="60px" align="right" nowrap>'
epis = home1[grep(means, home1)]
epis = epis[57:531]
epis = epis[49:474]
# prune data
loc = function(x) substring.location(x, "</span>")$first[1]
epis = data.frame(epis, stringsAsFactors=FALSE)
epis = cbind(epis,apply(epis, 1, loc))
epis$cut = NA
for(i in 1:dim(epis)[1]){
epis[i,3] = substr(epis[i,1], epis[i,2]-4, epis[i,2]-1)
}
# create data frame
ts1 = data.frame(rate=epis$cut, episode=50:475)
# remove out of season shows and movie
ts1 = ts1[!(ts1$episode %in% c(178,450,451)),]
# make numeric
ts1$rate = as.numeric(as.character(ts1$rate))
# changepoint function
mean2.pelt = cpt.mean(ts1$rate, method='PELT')

# plot results
plot(mean2.pelt,type='l',cpt.col='red',xlab='Episode',
ylab='Average Rating',cpt.width=2, main="Changepoints in 'The Simpsons' Ratings")

# what episodes ?
# The City of New York vs. Homer Simpson
# Wild Barts Can't Be Broken
# I, (Annoyed Grunt)-Bot
# Stop Or My Dog Will Shoot!