# Undemocratic Democracies

I’ve always found it ironic that most countries with the word “Democratic” in their name are exceptionally undemocratic in practice. So I was very interested in the following post on Reddit this week showing exactly this (https://www.reddit.com/r/dataisbeautiful/comments/7nkyek/countries_with_the_word_democratic_in_their/).

However, there are only 8 countries that have “democratic” in their name: People’s Democratic Republic of Algeria, Democratic Republic of the Congo, Democratic Republic of Timor-Leste, Federal Democratic Republic of Ethiopia, Lao People’s Democratic Republic, Democratic People’s Republic of Korea, Federal Democratic Republic of Nepal, and the Democratic Socialist Republic of Sri Lanka. This, and the fact that I am back teaching next week, got me thinking that this might be a nice example to show how two-sample t-tests work.

The 8 “democratic” countries have a mean democracy index score of 3.89 with a sample standard deviation of 2.174942. This contrasts with a mean and standard deviation of 5.602013 and 2.172326 for the remaining 159 countries in the Economist Intelligence Unit’s democracy index (https://en.wikipedia.org/wiki/Democracy_Index).

Let’s conduct a two-sample t-test that assumes equal variances to examine whether this difference in means is statistically significant. The null hypothesis is that the two population means are equal, against a two-tailed alternative. The test statistic follows Student’s (Gosset’s) t-distribution with n1+n2-2 = 159+8-2 = 165 degrees of freedom, and is calculated as follows (formula here: https://en.wikipedia.org/wiki/Student%27s_t-test):

s = sqrt(((158*2.172326^2)+(7*2.174942^2))/(159+8-2))
t = (5.602013-3.89)/(s*(sqrt(1/159+1/8)))=2.1749

which allows us to reject the null at the conventional alpha of 0.05. The p-value is ~0.03, meaning we would not be able to reject at the 1% level. Interestingly, if you drop the equal-variance assumption (i.e. run Welch’s t-test), you can no longer reject the null at the 5% level.
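
As a quick check, the two-tailed p-value can be recovered in R directly from the test statistic and the degrees of freedom:

```r
# two-tailed p-value for the equal-variance two-sample t-test
t_stat <- 2.1749
df <- 159 + 8 - 2  # n1 + n2 - 2 = 165
p_value <- 2 * pt(-abs(t_stat), df)
p_value  # ~0.03: significant at the 5% level, but not at the 1% level
```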

Hopefully, this example will be of interest to people teaching stats and econometrics for undergrads!

```r
rm(list = ls())
other <- c(9.93, 9.5,	9.39,	9.26,	9.2,	9.15,	9.15,	9.09,	9.03,	9.01,	8.81,	8.8,	8.63,	8.41,	8.39,	8.36,	8.3,	8.28,	8.17,	7.99,	7.98,	7.98,	7.94,	7.92,	7.92,	7.88,	7.87,	7.86,	7.85,	7.85,	7.82,	7.81,	7.79,	7.78,	7.77,	7.65,	7.51,	7.47,	7.41,	7.39,	7.31,	7.29,	7.23,	7.13,	7.1,	7.01,	6.97,	6.96,	6.94,	6.9,	6.83,	6.77,	6.75,	6.75,	6.72,	6.67,	6.67,	6.65,	6.64,	6.62,	6.62,	6.59,	6.57,	6.54,	6.47,	6.42,	6.4,	6.38,	6.31,	6.27,	6.25,	6.21,	6.03,	6.01,	5.99,	5.93,	5.92,	5.92,	5.91,	5.81,	5.76,	5.73,	5.72,	5.7,	5.7,	5.67,	5.64,	5.63,	5.55,	5.33,	5.31,	5.26,	5.23,	5.07,	5.04,	4.93,	4.93,	4.92,	4.87,	4.86,	4.81,	4.77,	4.7,	4.68,	4.55,	4.5,	4.49,	4.33,	4.27,	4.2,	4.08,	4.02,	4.02,	3.96,	3.96,	3.96,	3.88,	3.85,	3.81,	3.74,	3.71,	3.54,	3.46,	3.46,	3.4,	3.38,	3.32,	3.31,	3.24,	3.18,	3.14,	3.14,	3.07,	3.06,	3.05,	3.04,	3.03,	2.91,	2.91,	2.83,	2.79,	2.75,	2.65,	2.55,	2.4,	2.37,	2.37,	2.34,	2.25,	2.06,	1.98,	1.95,	1.93,	1.89,	1.83,	1.7,	1.61,	1.5,	1.43)
demo <- c(7.24, 6.48, 4.86, 3.6, 3.56, 2.37, 1.93, 1.08)

mean(other) ; sd(other) ; length(other)
mean(demo) ; sd(demo) ; length(demo)

t.test(other, demo, var.equal = TRUE)
s = sqrt(((158*2.172326^2)+(7*2.174942^2))/(159+8-2))
t = (5.602013-3.89)/(s*(sqrt(1/159+1/8)))

t.test(other, demo, var.equal = FALSE)

library(ggplot2)
data1 <- data.frame(Score = other, Name = "No")
data2 <- data.frame(Score = demo, Name = "Yes")
data <- rbind(data1, data2)

ggplot(data, aes(Name, Score)) +
  geom_boxplot(fill = "lightblue") +
  theme_bw() +
  xlab("Democratic in Name") +
  ylab("Democracy Score")

```

# How Predictable is the English Premier League?

The reason why football is so exciting is uncertainty. The outcome of any match or league is unknown, and you get to watch the action unfold without knowing what’s going to happen. Watching matches where you know the score is never exciting.

This weekend the English Premier League season will conclude with little fanfare. Bar one relegation place, the league positions have already been determined. In fact, these positions were, for the most part, decided weeks ago. The element of uncertainty seems to have been reduced this season.

With this in mind, I wanted to look at uncertainty over the long run in English football. To do this, I used the data provided by http://www.football-data.co.uk/ and analyzed them with R. These data consist of 34,740 matches played in the top 5 divisions of English football between 2000 and 2015, containing information about both the result and the odds offered by bookies on this result.

To measure the uncertainty of any given match I used the following strategy. First, I averaged across all bookies’ odds for the three possible events: home win, draw, and away win. Next, I mapped these aggregated odds into probabilities by inverting each of the odds and then dividing by the sum of the inverted odds. This takes care of the overround that helps bookies make a profit. For example, if the odds were 2.1/1 that an event happens and 2.1/1 that it doesn’t, then the implied probability of the event occurring is:

(1/2.1)/ (1/2.1 + (1/2.1)) = 0.4761905/(0.4761905+0.4761905) = 0.5.

Finally, to measure the uncertainty of each match, I subtract the probability of the event that actually occurred from 1 to calculate a “residual” score. Imagine a home win occurs: the “residual” in this case is 1-P(home win). If P(home win)=1, there is no uncertainty, and the score is zero. Since there are 3 outcomes, we would expect the measure to be bounded between 0 (no uncertainty) and 2/3 ≈ 0.67 (pure uncertainty), where we would get 1 out of 3 right just by guessing.
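
A minimal sketch of the conversion and the residual measure (the decimal odds here are made up for illustration):

```r
# convert a vector of decimal odds to implied probabilities,
# normalising away the bookmaker's overround
odds2prob <- function(odds) (1 / odds) / sum(1 / odds)

# made-up decimal odds for home win, draw, away win
probs <- odds2prob(c(2.1, 3.4, 3.9))
sum(probs)  # normalised probabilities sum to 1

# suppose the home win occurred: the uncertainty "residual" is 1 - P(home win)
residual <- 1 - probs[1]

# with three equally likely outcomes the residual is always 2/3,
# the pure-uncertainty upper bound mentioned above
1 - odds2prob(c(3, 3, 3))[1]
```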

After importing these data into R and calculating the uncertainty measure, I looked at how it evolves over time. The plot above shows fitted smoothed trend lines of uncertainty, stratified by division. These trends are striking. Going by this graph, the Premier League has become more predictable over the analysis period. In 2000, the uncertainty measure was around 0.605. Given that we expect this measure to be bounded between 0 (complete certainty) and 0.67 (completely random), this tells us that the average league game was very unpredictable. Over time, however, the measure has decreased by only about 5%, which does not seem like much. Despite the somewhat unexciting end to the 2014/15 season, the outcome of the average game is still not very predictable.

Noticeably, there is even greater uncertainty in lower-league games. In fact, the average uncertainty measure for League 2 games approached 0.65 in 2014, indicating that the average League 2 game is about as unpredictable as a game of rock-paper-scissors. Interestingly, and unlike the Premier League, there does not appear to be any discernible change over time: these games are just as unpredictable now as they were in 2000. My R code is below.

```r
# clear workspace
rm(list = ls())

# libraries
library(ggplot2)

# build the data urls: one file per division per season
# (seasons 2000/01 through 2014/15; the Conference is covered from 2005/06)
years = c(rep("0001",4), rep("0102",4), rep("0203",4), rep("0304",4), rep("0405",4),
          rep("0506",5), rep("0607",5), rep("0708",5), rep("0809",5),
          rep("0910",5), rep("1011",5), rep("1112",5), rep("1213",5),
          rep("1314",5), rep("1415",5))
divis = c(rep(c("E0","E1","E2","E3"),5), rep(c("E0","E1","E2","E3","EC"),10))

urls = paste(years, divis, sep="/")
urls = paste("http://www.football-data.co.uk/mmz4281", urls, sep="/")

# bookmaker odds columns: home/draw/away for each bookie
odds = c("B365H","B365D","B365A",
         "BSH","BSD","BSA",
         "BWH","BWD","BWA",
         "GBH","GBD","GBA",
         "IWH","IWD","IWA",
         "LBH","LBD","LBA",
         "PSH","PSD","PSA",
         "SOH","SOD","SOA",
         "SBH","SBD","SBA",
         "SJH","SJD","SJA",
         "SYH","SYD","SYA",
         "VCH","VCD","VCA",
         "WHH","WHD","WHA")
home = odds[seq(1,length(odds),3)]
draw = odds[seq(2,length(odds),3)]
away = odds[seq(3,length(odds),3)]

# load all data in a loop
full.data = NULL
for(i in 1:length(urls)){
  temp = read.csv(urls[i], stringsAsFactors = FALSE)
  # calculate average odds across all available bookies
  temp$homeodds = apply(temp[,names(temp) %in% home], 1, function(x) mean(x, na.rm=TRUE))
  temp$drawodds = apply(temp[,names(temp) %in% draw], 1, function(x) mean(x, na.rm=TRUE))
  temp$awayodds = apply(temp[,names(temp) %in% away], 1, function(x) mean(x, na.rm=TRUE))
  temp = temp[,c("Div","Date","FTHG","FTAG","FTR","homeodds","drawodds","awayodds")]
  full.data = rbind(full.data, temp)
}

full.data$homewin = ifelse(full.data$FTR=="H", 1, 0)
full.data$draw = ifelse(full.data$FTR=="D", 1, 0)
full.data$awaywin = ifelse(full.data$FTR=="A", 1, 0)

# convert average odds to probabilities, normalising away the overround
full.data$homeprob = (1/full.data$homeodds)/(1/full.data$homeodds+1/full.data$drawodds+1/full.data$awayodds)
full.data$drawprob = (1/full.data$drawodds)/(1/full.data$homeodds+1/full.data$drawodds+1/full.data$awayodds)
full.data$awayprob = (1/full.data$awayodds)/(1/full.data$homeodds+1/full.data$drawodds+1/full.data$awayodds)

# bookie residual: 1 minus the probability of the outcome that occurred
full.data$bookieres = 1-full.data$homeprob
full.data$bookieres[full.data$FTR=="D"] = 1-full.data$drawprob[full.data$FTR=="D"]
full.data$bookieres[full.data$FTR=="A"] = 1-full.data$awayprob[full.data$FTR=="A"]

# parse dates (both two- and four-digit year formats occur in the files)
full.data$time = ifelse(nchar(as.character(full.data$Date))==8,
                        as.Date(full.data$Date, format='%d/%m/%y'),
                        as.Date(full.data$Date, format='%d/%m/%Y'))
full.data$date = as.Date(full.data$time, origin = "1970-01-01")

full.data$Division = "Premier League"
full.data$Division[full.data$Div=="E1"] = "Championship"
full.data$Division[full.data$Div=="E2"] = "League 1"
full.data$Division[full.data$Div=="E3"] = "League 2"
full.data$Division[full.data$Div=="EC"] = "Conference"

full.data$Division = factor(full.data$Division,
                            levels = c("Premier League", "Championship", "League 1",
                                       "League 2", "Conference"))

# plot smoothed uncertainty over time, stratified by division
ggplot(full.data, aes(date, bookieres, colour=Division)) +
  stat_smooth(size = 1.25, alpha = 0.2) +
  labs(x = "Year", y = "Uncertainty") +
  theme_bw() +
  theme(legend.position="bottom") +
  theme(axis.text=element_text(size=20),
        axis.title=element_text(size=20),
        legend.title = element_text(size=20),
        legend.text = element_text(size=20))
```

# Coal and the Conservatives

Interesting election results in the UK over the weekend, where the Conservatives romped to victory. This was despite a widespread consensus that neither the Conservatives nor Labour would win a majority. It was a triumph for uncertainty and random error over the deterministic, as none of the statistical forecasts appeared to deem such a decisive victory probable. The UK election is a lot harder to model than the US one, for numerous reasons.

This means that a lot of pollsters and political forecasters will have to go back to the drawing board and re-evaluate their methods. Obviously, the models used to forecast the 2015 election could not handle the dynamics of the British electorate. However, there is a high degree of persistence within electoral constituencies. Let’s explore this persistence by looking at the relationship between coal and the Conservative (Tory) vote share.

Following a tweet by Vaughan Roderick and using the methodology of Fernihough and O’Rourke (2014), I matched each of the constituencies to Britain’s coalfields, creating a “proximity to coal” measure. What the plot below shows is striking. Being located on or in close proximity to a coalfield reduces the Tory vote share by about 20 percentage points. When we control (linearly) for latitude and longitude, this association weakens, but not by much. For me, this plot highlights a long-standing relationship between Britain’s industrial revolution, the urban working class, and the labour/union movement. What I find interesting is that this relationship has persisted despite de-industrialization and the move away from large-scale manufacturing.

```r
> summary(lm(tory~coal,city))

Call:
lm(formula = tory ~ coal, data = city)

Residuals:
    Min      1Q  Median      3Q     Max
-42.507 -10.494   2.242  10.781  29.074

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept)  42.9492     0.7459   57.58   <2e-16 ***
coal        -24.9704     1.8887  -13.22   <2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 14.36 on 630 degrees of freedom
Multiple R-squared:  0.2172,	Adjusted R-squared:  0.216
F-statistic: 174.8 on 1 and 630 DF,  p-value: < 2.2e-16

# robust to lat-long?
> summary(lm(tory~coal+longitude+latitude,city))

Call:
lm(formula = tory ~ coal + longitude + latitude, data = city)

Residuals:
    Min      1Q  Median      3Q     Max
-44.495  -8.269   1.485   9.316  28.911

Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 246.4355    18.9430  13.009  < 2e-16 ***
coal        -15.1616     1.8697  -8.109 2.68e-15 ***
longitude     1.4023     0.4015   3.493 0.000512 ***
latitude     -3.8621     0.3651 -10.578  < 2e-16 ***
---
Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

Residual standard error: 12.76 on 628 degrees of freedom
Multiple R-squared:  0.3838,	Adjusted R-squared:  0.3809
F-statistic: 130.4 on 3 and 628 DF,  p-value: < 2.2e-16

```

# Linear Models with Multiple Fixed Effects

Estimating a least squares linear regression model with fixed effects is a common task in applied econometrics, especially with panel data. For example, one might have a panel of countries and want to control for fixed country factors. In this case the researcher effectively includes the country identifier as a factor variable, and then estimates a model containing as many dummy variables as there are countries (minus one if an intercept is included in the modelling equation). Obviously, this approach is computationally problematic when there are many fixed factors: in our simple example, each extra country adds an extra column to the $X$ matrix used in the least squares calculation.
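
To see the dimensionality problem concretely, here is a small sketch (with made-up identifiers) showing how the dummy-variable design matrix grows with the number of countries:

```r
# panel of 50 countries observed over 10 years each
country <- factor(rep(1:50, each = 10))
x <- rnorm(500)

# expanding the factor into dummies: one column per country (minus one,
# since an intercept is included), plus the intercept and the slope
X <- model.matrix(~ x + country)
dim(X)  # 500 rows and 1 + 1 + 49 = 51 columns; a 51st country would add another
```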

Fortunately, a number of data transformations can be used in this panel setting. These include demeaning each observation within its unit, taking first differences, or including the group means as additional explanatory variables (as suggested by Mundlak (1978)). However, these approaches only work well when there is a single fixed factor to account for.
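
As a quick illustration of the demeaning transformation, the sketch below (simulated data, made-up names) checks that within-group demeaning recovers exactly the same slope as the dummy-variable (LSDV) regression:

```r
set.seed(1)
id <- rep(1:100, each = 10)  # 100 groups, 10 observations each
x  <- rnorm(1000)
y  <- 2.5*x + rep(rnorm(100), each = 10) + rnorm(1000)  # group fixed effects

# least squares dummy variable (LSDV) estimate
b_lsdv <- coef(lm(y ~ x + factor(id)))["x"]

# within transformation: subtract group means from y and x
y_dm <- y - ave(y, id)
x_dm <- x - ave(x, id)
b_within <- coef(lm(y_dm ~ x_dm - 1))["x_dm"]

all.equal(unname(b_lsdv), unname(b_within))  # the two slopes coincide
```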

Simen Gaure offers a solution to this problem that allows for multiple fixed effects without resorting to a computationally burdensome methodology. Essentially, the solution is an elaboration of the group-demeaning transformation mentioned above. More technical details can be found in Gaure’s forthcoming article in Computational Statistics & Data Analysis. Those interested in implementing this estimation strategy in R can use the lfe package available on CRAN.

Below, I have included a simple example of how the package works. In this example, the model needs fixed effects for two factor variables. Obviously, adding 2,000 columns to the data frame is not a convenient way to estimate a model that includes fixed effects for both the x2 and x3 variables. However, the felm function tackles this problem with ease. Stata has a similar function, areg, although areg only allows fixed effects to be absorbed for one variable.

```r
# clear workspace
rm(list = ls())
library(lfe)

# create data frame: x2 and x3 are identifiers with 1,000 levels each
x1 <- rnorm(10000)
x2 <- rep(1:1000, 10)
x3 <- rep(1:1000, 10)
e1 <- sin(x2) + 0.02*x3^2 + rnorm(10000)
y <- 10 + 2.5*x1 + (e1 - mean(e1))
dat <- data.frame(x1, x2, x3, y)

# simple lm: x1's effect is drowned out by the huge unmodelled group effects
lm(y ~ x1)
# lm with two sets of fixed effects
# (newer versions of lfe write this as: felm(y ~ x1 | x2 + x3, data = dat))
felm(dat$y ~ dat$x1 + G(dat$x2) + G(dat$x3))

##############################################
# output
##############################################
# simple lm
> lm(y ~ x1)
Call: lm(formula = y ~ x1)
Coefficients:
(Intercept)           x1
      10.47       -36.95
> # lm with fixed effects
> felm(dat$y ~ dat$x1 + G(dat$x2) + G(dat$x3))
dat$x1
 2.501
```

# Dummies for Dummies

Most R functions used in econometrics convert factor variables into a set of dummy/binary variables automatically. This is useful when estimating a linear model, saving the user from the laborious activity of manually including the dummy variables as regressors. However, what if you want to reshape your dataframe so that it contains such dummy variables?

The following function, datdum(.), is a simple workaround. The first argument is the name of the factor variable (which can also be a character variable), the second is the dataframe, and the third is the stub used to name the new dummy variables.

```r
datdum <- function(x, data, name){
  # model.matrix on a formula with -1 drops the intercept,
  # giving one dummy column per factor level
  mm <- data.frame(model.matrix(~ -1 + factor(data[,x])))
  names(mm) <- paste(name, 1:dim(mm)[2], sep=".")
  data <- cbind(data, mm)
  return(data)
}

# simple example
dat <- c("A","B","C")
dat <- data.frame(dat)
datdum(x="dat", data=dat, name="category")
#########################
# output
#########################
> dat
  dat
1   A
2   B
3   C
> datdum(x="dat", data=dat, name="category")
  dat category.1 category.2 category.3
1   A          1          0          0
2   B          0          1          0
3   C          0          0          1
```

# Instrumental Variables without Traditional Instruments

Typically, regression models in empirical economic research suffer from at least one form of endogeneity bias.

The classic example is economic returns to schooling, where researchers want to know how much increased levels of education affect income. Estimation using a simple linear model, regressing income on schooling, alongside a bunch of control variables, will typically not yield education’s true effect on income. The problem here is one of omitted variables – notably unobserved ability. People who are more educated may be more motivated or have other unobserved characteristics which simultaneously affect schooling and future lifetime earnings.

Endogeneity bias plagues empirical research. However, there are solutions, the most common being instrumental variables (IVs). Unfortunately, the exclusion restrictions needed to justify the use of traditional IV methodology may be impossible to find.

So, what if you have an interesting research question and some data, but endogeneity and no IVs? You should give up, right? Wrong. According to Lewbel (forthcoming in the Journal of Business and Economic Statistics), it is possible to overcome the endogeneity problem without a traditional IV.

Lewbel’s paper demonstrates how higher-order moment restrictions can be used to tackle endogeneity in triangular systems. Without going into too much detail (interested readers can consult Lewbel’s paper), the method resembles the traditional two-stage instrumental variable approach, except that the exclusion restriction is generated from heteroskedasticity in the exogenous (control) variables; practitioners can test for this in the usual way, e.g. with a White test.

In the code below, I demonstrate how one could employ this approach in R using the GMM framework outlined by Lewbel. My code only relates to a simple example with one endogenous variable and two exogenous variables. However, it would be easy to modify this code depending on the model.

```r
rm(list = ls())
library(gmm)
# gmm moment function for 1 endogenous variable with 2 heteroskedastic
# exogenous variables
# outcome in the first column of 'datmat', endogenous variable in the second
# constant and exogenous variables in the next three
# heteroskedastic exogenous variables in the last two (i.e. no constant)
g1 <- function(theta, datmat) {
  # set up data
  y1 <- matrix(datmat[,1], ncol=1)
  y2 <- matrix(datmat[,2], ncol=1)
  x1 <- matrix(datmat[,3:5], ncol=3)
  z1 <- matrix(datmat[,4:5], ncol=2)
  # if the variable in the 4th col was not heteroskedastic
  # this could be modified so:
  # z1 <- matrix(datmat[,5], ncol=1)

  # set up moment conditions
  in1 <- (y1 - theta[1]*x1[,1] - theta[2]*x1[,2] - theta[3]*x1[,3])
  M <- NULL
  for(i in 1:dim(z1)[2]){
    M <- cbind(M, (z1[,i] - mean(z1[,i])))
  }
  in2 <- (y2 - theta[4]*x1[,1] - theta[5]*x1[,2] - theta[6]*x1[,3] - theta[7]*y1)
  for(i in 1:dim(x1)[2]){ M <- cbind(M, in1*x1[,i]) }
  for(i in 1:dim(x1)[2]){ M <- cbind(M, in2*x1[,i]) }
  for(i in 1:dim(z1)[2]){ M <- cbind(M, in2*((z1[,i] - mean(z1[,i]))*in1)) }
  return(M)
}
# estimation is then simply:
# gmm(moment function, data matrix, initial values vector)
# e.g.: gmm(g1, x = as.matrix(dat), c(1,1,1,1,1,1,1))
```

I also tested the performance of Lewbel’s GMM estimator in comparison with a mis-specified OLS estimator. In the code below, I perform 500 simulations of a triangular system containing an omitted variable. For the GMM estimator, it is useful to have good initial starting values. In this simple example, I use the OLS coefficients. In more complicated settings, it is advisable to use the estimates from the 2SLS procedure outlined in Lewbel’s paper. The distributions of the coefficient estimates are shown in the plot below. The true value, indicated by the vertical line, is one. It is pretty evident that the Lewbel approach works very well. I think this method could be very useful in a number of research disciplines.

```r
beta1 <- beta2 <- NULL
for(k in 1:500){
  # generate data (including intercept)
  x1 <- rnorm(1000,0,1)
  x2 <- rnorm(1000,0,1)
  u <- rnorm(1000,0,1)
  s1 <- rnorm(1000,0,1)
  s2 <- rnorm(1000,0,1)
  ov <- rnorm(1000,0,1)
  e1 <- u + exp(x1)*s1 + exp(x2)*s1
  e2 <- u + exp(-x1)*s2 + exp(-x2)*s2
  y1 <- 1 + x1 + x2 + ov + e2
  y2 <- 1 + x1 + x2 + y1 + 2*ov + e1
  x3 <- rep(1,1000)
  dat <- cbind(y1,y2,x3,x1,x2)

  # record ols estimate
  beta1 <- c(beta1, coef(lm(y2~x1+x2+y1))[4])
  # initial values for iv-gmm, ordered to match theta:
  # y1 equation first, then y2 equation
  init <- c(coef(lm(y1~x1+x2)), coef(lm(y2~x1+x2+y1)))
  # record gmm estimate
  beta2 <- c(beta2, coef(gmm(g1, x = as.matrix(dat), init))[7])
}

library(sm)
d <- data.frame(rbind(cbind(beta1,"OLS"), cbind(beta2,"IV-GMM")))
d$beta1 <- as.numeric(as.character(d$beta1))
sm.density.compare(d$beta1, d$V2, xlab="Endogenous Coefficient")
title("Lewbel and OLS Estimates")
legend("topright", levels(d$V2), lty=c(1,2,3), col=c(2,3,4), bty="n")
abline(v=1)
```