Thursday, September 19, 2013

The Case Against High-School Sports - Amanda Ripley - The Atlantic

"The United States routinely spends more tax dollars per high-school athlete than per high-school math student—unlike most countries worldwide. And we wonder why we lag in international education rankings?"


Saturday, September 14, 2013

Law School: Worth It?

I've been digesting the new paper "The Economic Value of a Law Degree," by Simkovic and McIntyre. It concludes:
"After controlling for observable ability sorting, we find that a law degree is associated with a 60 percent median increase in monthly earnings and 50 percent increase in median hourly wages. The mean annual earnings premium of a law degree is approximately $53,300 in 2012 dollars. The law degree earnings premium is cyclical and recent years are within historic norms."

The paper's introduction says: 

"The purpose of this article is to estimate, as closely as data permits, the causal effect on earnings of a particular type of education, the law degree. Rather than viewing law degree holders in isolation, we can get better estimates of the causal effect of education by comparing the earnings of individuals with law degrees to the earnings of similar individuals with bachelor’s degrees while being mindful of the statistical effects of selection into law school."

It's important for readers to know, however, that the article does not estimate the causal effect of law school. The authors are not running a randomized experiment, wherein otherwise identical college graduates are randomly assigned to go to law school or not. They do not rely on any exogenous shock to the availability of law school. They do not rely on any discontinuity in eligibility for law school, such that a regression discontinuity design would be possible. They do not have an instrumental variable that affects the availability of law school (but that has no effect on earnings through any other mechanism, which would mean the instrument violated the typical exclusion restriction).

Instead, they compare the earnings of law school graduates to those of bachelor's degree holders who are similar along a number of observable dimensions.

This is not sufficient to support a causal inference about the value of law school. The problem is the familiar one: correlation is not causation. Even after controlling for observables, there may be significant differences between the people who attend law school and those who don't -- most notably in motivation, ambition, aggressiveness, and the like. Current datasets have no reliable way to control for these factors (self-reported survey answers are not very useful, given that it is socially unacceptable to answer "I have zero ambition for my life" or "I am supremely committed to increasing my income at all costs," even when one of those is true).

Take two hypothetical people, "John" and "Andrew." Let's say they are both English majors at the same university, with identical SAT scores, identical grades, identical family backgrounds, and the like. If you ask them what level of income they would like to make, they even give similar answers.

But in terms of their revealed behavior, "John" works in a coffee shop after he graduates (he never even tries to do anything else), while "Andrew" goes to law school. It is a good guess that a group of 1,000 Andrews is more ambitious and will likely end up earning more than a group of 1,000 Johns, completely apart from any effect of law school attendance.
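A toy simulation makes the worry concrete. All the numbers below are invented for illustration: suppose an unobserved "ambition" trait both drives law-school attendance and independently raises earnings. Then the naive comparison of law graduates to bachelor's-only holders overstates the degree's true causal effect, even though every individual number was generated honestly.

```python
import random

random.seed(0)

TRUE_DEGREE_EFFECT = 20_000  # assumed causal earnings boost from the degree

population = []
for _ in range(100_000):
    ambition = random.gauss(0, 1)        # unobserved trait, never in the data
    goes_to_law_school = ambition > 0.5  # ambitious people self-select in
    # Ambition raises earnings by $15k per s.d. regardless of schooling.
    earnings = 60_000 + 15_000 * ambition
    if goes_to_law_school:
        earnings += TRUE_DEGREE_EFFECT
    population.append((goes_to_law_school, earnings))

law = [e for went, e in population if went]
ba = [e for went, e in population if not went]
naive_premium = sum(law) / len(law) - sum(ba) / len(ba)

print(f"true causal effect: ${TRUE_DEGREE_EFFECT:,}")
print(f"naive comparison:   ${naive_premium:,.0f}")
```

With these made-up parameters, the naive "law premium" comes out more than twice the true $20,000 effect, because the law-school group is systematically more ambitious. No amount of controlling for SAT scores or grades fixes this, since ambition is never observed.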

Another way of putting it is that to make a causal inference, we have to be able to get some idea of the counterfactual -- that is, how much income would the Andrews make if they were prevented from going to law school and had to do something else instead. The ideal way to get a counterfactual, of course, is random assignment. If we could randomly assign only half of the Andrews to attend law school, then we would actually see what the other half of the Andrews do with themselves when not allowed to attend law school.

A good guess, though, is that if law school were ruled out, the other half of the Andrews might not be content with just a bachelor's degree. Many of them might consider an MBA, or a master's in accounting, or even medical school. This means, however, that when Simkovic and McIntyre compare law school graduates to the pool of bachelor's-only degree holders, they are not looking at a good counterfactual. 

Indeed, the economic value of law school could be negative. What if ambitious young people went to business school instead of law school? In the long run, they might earn only $10,000 less per year on average (as Simkovic and McIntyre note in footnote 33). At a discount rate of 3%, 30 years of $10,000 in extra real income would be worth about $196,000 in present value terms. But they would also save a year's tuition (which, if borrowed, could add to debt payments for decades) and a year's forgone salary in the near term.
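The $196,000 figure is just the present value of a 30-year annuity of $10,000 a year discounted at 3%, which is easy to check:

```python
# Present value of $10,000/year for 30 years at a 3% real discount rate.
payment, rate, years = 10_000, 0.03, 30

pv = sum(payment / (1 + rate) ** t for t in range(1, years + 1))
# Equivalent closed form: payment * (1 - (1 + rate) ** -years) / rate
print(f"${pv:,.0f}")  # roughly $196,000
```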

It's not too hard to envision that for many young people -- those who face high borrowing costs and high opportunity costs in the present, but who have below-average chances of becoming high-earning lawyers -- the actual value of law school could well be negative. Notably, this can be true even if Simkovic and McIntyre are right that these people earn more after going to law school than do "similar" bachelor's degree holders.

It is thus improper to suggest that law school "causes" the Andrews to have higher earnings. Law school might have some causal role, to be sure, but that cannot be determined.

All of which is to say, the title of this paper is wrong. Rather than being titled "The Economic Value of a Law Degree," the paper should more accurately be titled, "The Economic Value of a Law Degree Mixed In Unidentifiable Proportion With The Economic Value Of Being The Sort of Ambitious Person Who Chooses To Go To Law School."

Heckman is Wrong

James Heckman (Chicago, Nobel prize winner) has been arguing that society should offer universal preschool for youngsters. His main evidence for this claim -- which I agree with, by the way -- is a couple of extremely small studies from the 1970s. What bothers me, however, is how he depicts these studies. To wit, here's what he writes in the New York Times:
Also holding back progress are those who claim that Perry and ABC are experiments with samples too small to accurately predict widespread impact and return on investment. This is a nonsensical argument. Their relatively small sample sizes actually speak for — not against — the strength of their findings. Dramatic differences between treatment and control-group outcomes are usually not found in small sample experiments, yet the differences in Perry and ABC are big and consistent in rigorous analyses of these data.
Contrary to what Heckman says, dramatic differences between treatment and control groups are MOST likely to show up in small samples. This is one of the most basic facts any empirical scholar learns: when the sample size is small, sampling error is at its largest (for example, if a small sample happens to include a few outliers, they can swing the results dramatically in either direction). In large samples, you would expect smaller estimated effects, because a few outliers have far less opportunity to swing the results.
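A quick simulation, using invented numbers (a true effect of 2 points on an IQ-like scale with a standard deviation of 15), shows how much wider the spread of estimated effects is at Perry-sized samples than at large ones -- so dramatic estimates are more likely, not less, when n is small:

```python
import random
import statistics

random.seed(0)

TRUE_EFFECT = 2.0   # assumed true treatment effect
SD = 15.0           # outcome standard deviation (IQ-like scale)

def estimated_effect(n):
    """Difference in means between a treatment and control group of size n."""
    treated = [random.gauss(TRUE_EFFECT, SD) for _ in range(n)]
    control = [random.gauss(0.0, SD) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

for n in (60, 6_000):  # Perry-sized experiment vs. a large trial
    estimates = [estimated_effect(n) for _ in range(2_000)]
    spread = statistics.stdev(estimates)
    print(f"n={n:>5}: spread of estimates (s.d.) = {spread:.2f}")
```

With n = 60 per arm, estimated effects routinely land several points away from the true value of 2 in either direction; with n = 6,000 the estimates cluster tightly around it. A "big" finding in a tiny sample is exactly what sampling error predicts will happen some of the time.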

It's disturbing that Heckman seems to say the opposite of such a basic point.

Pascal Junod » An Aspiring Scientist’s Frustration with Modern-Day Academia: A Resignation