There were four posts yesterday. The first was “Density:”
There’s an interesting discussion underway about whether rising population necessarily leads to rising land prices over time. Bill McBride of Calculated Risk says yes:
A key reason for the upward slope in real house prices is because some areas are land constrained, and with an increasing population, the value of land increases faster than inflation.
Noah Smith disagrees, because land isn’t really particularly scarce — when we see high land prices in some metro areas, it’s all about the agglomeration effects, which could go in various directions over time.
My first reaction to Smith’s comment was that it might not matter very much, because new metropolitan areas are very hard to create; there may be plenty of land around Lubbock, but nobody’s going to move there, so a growing population has to squeeze into the metropolitan areas we have. But then I realized that this might not be the last word either; even if people want to stay in existing metro areas, they can hive off “edge cities” at the, um, edges of these metro areas, so that the relevant population density — the density that makes land in or near urban hubs expensive — might not rise even if the overall population of the metro area goes up.
And we have data! Via Richard Florida, new work by the Census (pdf) calculates “population-weighted density” — a weighted average of density across census tracts, where the tracts are weighted not by land area but by population; this gives a much better idea of how the average person lives.
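The distinction between the standard measure and the population-weighted one is easy to see in a few lines of code. This is a minimal sketch with invented tract figures (the populations and areas below are illustrative only, not Census data):

```python
# A minimal sketch, with invented tract figures, of the Census
# "population-weighted density" idea: weight each tract's density by its
# share of people rather than its share of land.

tracts = [
    # (population, land area in square miles) -- hypothetical tracts
    (8000, 0.5),    # dense urban core tract
    (4000, 2.0),    # inner suburb
    (1000, 20.0),   # exurban tract
]

total_pop = sum(pop for pop, _ in tracts)
total_area = sum(area for _, area in tracts)

# Standard density: total population over total land area -- the density
# of the average square mile.
standard_density = total_pop / total_area

# Population-weighted density: each tract's density (pop/area), weighted
# by that tract's share of the population -- the density experienced by
# the average person.
weighted_density = sum(pop * (pop / area) for pop, area in tracts) / total_pop

print(round(standard_density))   # 578
print(round(weighted_density))   # 10465
```

Because most people live in the dense tracts, the weighted figure is an order of magnitude above the standard one, which is exactly the pattern in the national numbers.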
As Florida points out, the new measure conveys a much better sense of how metros differ. For example, by the standard density measure Los Angeles is actually denser than New York, basically because LA is hemmed in by mountains, limiting how far the sprawl/commuting zone can reach. But New York has an urban core in a way that LA does not, and sure enough, it has a much, much higher population-weighted density.
What I wanted, however, was trends — and the Census has calculated this measure both for metros and for national aggregates for both 2000 and 2010. Here’s what it looks like:
So, a couple of points. First, although America is a vast, thinly populated country, with fewer than 90 people per square mile, the average American lives in a quite densely populated neighborhood, with more than 5000 people per square mile. The next time someone talks about small towns as the “real America”, bear in mind that the real real America — the America in which most Americans live — looks more or less like metropolitan Baltimore.
Second, however, although the US population and hence the population density rose about 10 percent over the course of the naughties, the average American was living in a somewhat less dense neighborhood in 2010 than in 2000, as population spread out within metropolitan areas. If you like, we’re becoming a bit less a nation of Bostons and a bit more a nation of Houstons.
This is, I think, a picture of urban geography in which the link between overall rising population and land prices is likely to be diffuse at best. So I think I call this one for Smith — although McBride’s point that real housing prices do seem to have an upward trend remains important, and needs explaining.
The second post of the day was “Holy Coding Error, Batman:”
The intellectual edifice of austerity economics rests largely on two academic papers that were seized on by policy makers, without ever having been properly vetted, because they said what the Very Serious People wanted to hear. One was Alesina/Ardagna on the macroeconomic effects of austerity, which immediately became exhibit A for those who wanted to believe in expansionary austerity. Unfortunately, even aside from the paper’s failure to distinguish between episodes in which monetary policy was available and those in which it wasn’t, it turned out that their approach to measuring austerity was all wrong; when the IMF used a measure that tracked actual policy, it turned out that contractionary policy was contractionary.
The other paper, which has had immense influence — largely because in the VSP world it is taken to have established a definitive result — was Reinhart/Rogoff on the negative effects of debt on growth. Very quickly, everyone “knew” that terrible things happen when debt passes 90 percent of GDP.
Some of us never bought it, arguing that the observed correlation between debt and growth probably reflected reverse causation. But even I never dreamed that a large part of the alleged result might reflect nothing more profound than bad arithmetic.
But it seems that this is just what happened. Mike Konczal has a good summary of a review by Herndon, Ash, and Pollin. According to the review paper, R-R mysteriously excluded data on some high-debt countries with decent growth immediately after World War II, which would have greatly weakened their result; they used an eccentric weighting scheme in which a single year of bad growth in one high-debt country counts as much as multiple years of good growth in another high-debt country; and they dropped a whole bunch of additional data through a simple coding error.
Fix all that, say Herndon et al., and the result apparently melts away.
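The weighting point is the subtlest of the three lapses, and a toy calculation makes it concrete. The growth numbers below are made up for illustration; the point is only how much the averaging scheme matters when one country contributes a single year and another contributes many:

```python
# A toy illustration, with made-up growth numbers, of why the weighting
# scheme matters: averaging within each country first gives one country's
# single bad year the same weight as another country's many good years.

high_debt_growth = {
    "Country A": [-7.6],        # one terrible high-debt year
    "Country B": [2.4] * 19,    # nineteen decent high-debt years
}

# Country-weighted mean (the scheme the critique attributes to R-R):
# average within each country, then average the country means.
country_means = [sum(v) / len(v) for v in high_debt_growth.values()]
country_weighted = sum(country_means) / len(country_means)

# Year-weighted mean: every country-year counts equally.
all_years = [g for v in high_debt_growth.values() for g in v]
year_weighted = sum(all_years) / len(all_years)

print(round(country_weighted, 2))  # -2.6: the one bad year dominates
print(round(year_weighted, 2))     # 1.9
```

Same underlying data, a four-and-a-half-point swing in measured growth, purely from the choice of weights.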
If true, this is embarrassing and worse for R-R. But the really guilty parties here are all the people who seized on a disputed research result, knowing nothing about the research, because it said what they wanted to hear.
Wanna bet that the results will still be touted as valid? The penultimate post yesterday was “Reinhart-Rogoff, Continued:”
I was going to post something sort of kind of defending Reinhart-Rogoff in the wake of the new revelations — not their results, which I never believed, nor their failure to carefully test their results for robustness, but rather their motives. But their response to the new critique is really, really bad.
What Herndon et al. did was find that the R-R results on the relationship between debt and growth were partly the result of a coding error, partly the result of some very odd choices about which data to exclude and how to weight the data that remained. The effect of fixing these lapses was to raise the estimated mean growth of highly indebted countries by more than 2 percentage points.
So how do R-R respond?
First, they argue that another measure — median growth — isn’t that different from the Herndon et al results. But that is, first of all, an apples-and-oranges comparison — the fact is that when you compare the results head to head, R-R looks very off. Something went very wrong, and pointing to your other results isn’t a good defense.
Second, they say that they like to emphasize the median results, which are much milder than the mean results; but what everyone using their work likes to cite is the strong result, and if R-R have made a major effort to disabuse people of the notion that debt has huge negative effects on growth, I haven’t noticed it.
Third, they point out that even cleaned-up data do show a negative association between debt and growth. Yes, but that’s where the issue of reverse causation comes in. More on that in a second.
Finally, while they acknowledge the issue of reverse causation, they seem very much to be trying to have it both ways — saying yes, we know about the issue, but then immediately reverting to talking as if debt was necessarily causing slow growth rather than the other way around.
So, about the slow growth/debt connection: I’ve done a quick and dirty mini-RR for the period 1950-2007 (starting 1950 because that’s where the Total Economy Database starts), focusing only on the G7. If you look at the scatterplot, there does seem to be an association between high debt and slow growth:
But I’ve coded the points by country — and if you look at it, you see that most of the apparent relationship is coming from Italy and Japan; Britain didn’t seem to suffer much from its high debt in the 1950s. And it’s quite clear from the history that both Italy and (especially) Japan ran up high debts as a consequence of their growth slowdowns, not the other way around.
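The statistical trap here — a pooled correlation driven entirely by differences between countries rather than within any country’s history — can be sketched with fabricated numbers (the two countries and all figures below are hypothetical, not the G7 data):

```python
# A toy sketch, with fabricated numbers, of the pooling problem behind the
# reverse-causation point: if slow-growing countries are the ones that run
# up big debts, pooling all country-years yields a strong negative
# debt-growth correlation even when debt predicts nothing within any
# single country's history.

data = {
    # hypothetical country: (debt/GDP ratio, growth rate) by year
    "Fastland": [(30, 4.0), (40, 4.2), (50, 4.0)],
    "Slowland": [(100, 1.0), (110, 1.2), (120, 1.0)],
}

def corr(xs, ys):
    """Plain Pearson correlation, no libraries needed."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Pooled across all country-years: high debt "predicts" low growth.
pooled = [p for obs in data.values() for p in obs]
pooled_r = corr([d for d, _ in pooled], [g for _, g in pooled])

# Within each country separately: no relationship at all.
within = {c: corr([d for d, _ in obs], [g for _, g in obs])
          for c, obs in data.items()}

print(round(pooled_r, 2))                            # strongly negative
print({c: round(r, 2) for c, r in within.items()})   # zero within countries
```

The pooled scatter shows a steep negative slope even though, inside each country, debt and growth are unrelated — which is why coding the points by country, as in the mini-exercise above, is so revealing.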
So this is really disappointing; they’re basically evading the critique. And that’s a terrible thing when so much is at stake.
Well, it would appear that they’re true believers and/or that they intentionally fudged the results. The last post of the day was “Golden Rules:”
Barry Ritholtz has a great piece on the rules of goldbuggery. Nothing to add, so go read.