Krugman’s blog, 10/30/14

October 31, 2014

There was one post yesterday, “Google’s Analysis of Japan:”

It keeps trying to autocorrect “Domo arigato” to “Doom activator”. It may have a point, although as I have been saying, the West has messed up so badly that Japan’s response to the burst bubble is starting to look good by comparison.

In somewhat related snippets, Noah Smith on people who can’t admit that they were wrong about QE, and Peter Coy on why Keynes is more relevant than ever.

Brooks, Cohen and Krugman

October 31, 2014

In “Our Machine Masters” Bobo says the age of artificial intelligence is finally at hand, and he has a question: Will we master it, or will it master us?  Mr. Cohen, in “An Old Man in Prague,” says Sir Nicholas Winton saved Jewish children, and for decades said nothing. The deed speaks for itself.  In “Apologizing to Japan” Prof. Krugman says Western economists were scathing in their criticisms of Japanese policy, but the slump we fell into isn’t just similar to Japan’s. It’s worse.  Here’s Bobo:

Some days I think nobody knows me as well as Pandora. I create a new music channel around some band or song and Pandora feeds me a series of songs I like just as well. In fact, it often feeds me songs I’d already downloaded onto my phone from iTunes. Either my musical taste is extremely conventional or Pandora is really good at knowing what I like.

In the current issue of Wired, the technology writer Kevin Kelly says that we had all better get used to this level of predictive prowess. Kelly argues that the age of artificial intelligence is finally at hand.

He writes that the smart machines of the future won’t be humanlike geniuses like HAL 9000 in the movie “2001: A Space Odyssey.” They will be more modest machines that will drive your car, translate foreign languages, organize your photos, recommend entertainment options and maybe diagnose your illnesses. “Everything that we formerly electrified we will now cognitize,” Kelly writes. Even more than today, we’ll lead our lives enmeshed with machines that do some of our thinking tasks for us.

This artificial intelligence breakthrough, he argues, is being driven by cheap parallel computation technologies, big data collection and better algorithms. The upshot is clear: “The business plans of the next 10,000 start-ups are easy to forecast: Take X and add A.I.”

Two big implications flow from this. The first is sociological. If knowledge is power, we’re about to see an even greater concentration of power.

The Internet is already heralding a new era of centralization. As Astra Taylor points out in her book, “The People’s Platform,” in 2001, the top 10 websites accounted for 31 percent of all U.S. page views, but, by 2010, they accounted for 75 percent of them. Gigantic companies like Google swallow up smaller ones. The Internet has created a long tail, but almost all the revenue and power is among the small elite at the head.

Advances in artificial intelligence will accelerate this centralizing trend. That’s because A.I. companies will be able to reap the rewards of network effects. The bigger their network and the more data they collect, the more effective and attractive they become.

As Kelly puts it, “Once a company enters this virtuous cycle, it tends to grow so big, so fast, that it overwhelms any upstart competitors. As a result, our A.I. future is likely to be ruled by an oligarchy of two or three large, general-purpose cloud-based commercial intelligences.”

To put it more menacingly, engineers at a few gigantic companies will have vast-though-hidden power to shape how data are collected and framed, to harvest huge amounts of information, to build the frameworks through which the rest of us make decisions and to steer our choices. If you think this power will be used for entirely benign ends, then you have not read enough history.

The second implication is philosophical. A.I. will redefine what it means to be human. Our identity as humans is shaped by what machines and other animals can’t do. For the last few centuries, reason was seen as the ultimate human faculty. But now machines are better at many of the tasks we associate with thinking — like playing chess, winning at “Jeopardy!” and doing math.

On the other hand, machines cannot beat us at the things we do without conscious thinking: developing tastes and affections, mimicking each other and building emotional attachments, experiencing imaginative breakthroughs, forming moral sentiments.

In the age of smart machines, we’re not human because we have big brains. We’re human because we have social skills, emotional capacities and moral intuitions. I could paint two divergent A.I. futures, one deeply humanistic, and one soullessly utilitarian.

In the humanistic one, machines liberate us from mental drudgery so we can focus on higher and happier things. In this future, differences in innate I.Q. are less important. Everybody has Google on their phones, so having a great memory or the ability to calculate with big numbers doesn’t help as much.

In this future, there is increasing emphasis on personal and moral faculties: being likable, industrious, trustworthy and affectionate. People are evaluated more on these traits, which supplement machine thinking, and not the rote ones that duplicate it.

In the cold, utilitarian future, on the other hand, people become less idiosyncratic. If the choice architecture behind many decisions is based on big data from vast crowds, everybody follows the prompts and chooses to be like each other. The machine prompts us to consume what is popular, the things that are easy and mentally undemanding.

I’m happy Pandora can help me find what I like. I’m a little nervous if it so pervasively shapes my listening that it ends up determining what I like. I think we all want to master these machines, not have them master us.

Next up we have Mr. Cohen:

An old man went to Prague this week. He had spent much of his life keeping quiet about his deeds. They spoke for themselves. Now he said, “In a way perhaps I shouldn’t have lived so long to give everybody the opportunity to exaggerate everything in the way they are doing today.”

At the age of 105, Sir Nicholas Winton is still inclined toward self-effacement. He did what any normal human being would, only at a time when most of Europe had gone mad. A London stockbroker, born into a family of German Jewish immigrants who had changed their name from Wertheim and converted to Christianity, he rescued 669 children, most of them Jews, from Nazi-occupied Czechoslovakia in 1939. They came to Britain in eight transports. The ninth was canceled when Hitler invaded Poland on Sept. 1, 1939. The 250 children destined for it journeyed instead into the inferno of the Holocaust.

Winton, through family connections, knew enough of the Third Reich to see the naïveté of British officialdom still inclined to dismiss Hitler as a buffoon and talk of another war as fanciful. He raised money; he procured visas; he found foster families. His day job was at the Stock Exchange. The rest of his time he devoted to saving the doomed. There were enough bystanders. He wanted to help. Now he has outlived many of those he saved and long enough to know that thousands of their descendants owe their lives to him.

Back in Prague, 75 years on, Winton received the Order of the White Lion, the highest honor of the Czech Republic. The Czech Air Force sent a plane. He was serenaded at Prague Castle, in the presence of a handful of his octogenarian “children.” The only problem, he said, was that countries refused to accept unaccompanied children; only England would. One hundred years, he said, is “a heck of a long time.” The things he said were understated. At 105, one does not change one’s manner.

Only in 1988 did Winton’s wartime work begin to be known. His wife found a scrapbook chronicling his deeds. He appeared on a BBC television show whose host, Esther Rantzen, asked those in the audience who owed their lives to him to stand. Many did. Honors accrued. Now there are statues of him in London and Prague. “I didn’t really keep it secret,” he once said. “I just didn’t talk about it.”

Such discretion is riveting to our exhibitionist age. To live today is to self-promote or perish. Social media tugs the private into the public sphere with an almost irresistible force. Be followed, be friended — or be forgotten. This imperative creates a great deal of tension and unhappiness. Most people, much of the time, have a need to be quiet and still, and feel disinclined to raise their voice. Yet they sense that if they do not, they risk being seen as losers. Device anxiety, that restless tug to the little screen, is a reflection of a spreading inability to live without 140-character public affirmation. When the device is dead, so are you.

What gets forgotten, in the cacophony, is how new this state of affairs is. Winton’s disinclination to talk was not unusual. Silence was the reflex of the postwar generation. What was done was done because it was the right thing to do and therefore unworthy of note. Certainly among Jews silence was the norm. Survivors scarcely spoke of their torment. They did not tell their children. They repressed their memories. Perhaps discretion seemed the safer course; certainly it seemed the more dignified. Perhaps the very trauma brought wordlessness. The Cold War was not conducive to truth-telling. Anguish was better suffered in silence than passed along (although of course it filtered to the next generation anyway).

But there was something else, something really unsayable. Survival itself was somehow shameful, unbearable. By what right, after all, had one lived when those 250 children had not? Menachem Begin, the former Israeli prime minister whose parents and brother were killed by the Nazis, put this sentiment well: “Against the eyes of every son of the nation appear and reappear the carriages of death. … The Black Nights when the sound of an infernal screeching of wheels and the sighs of the condemned press in from afar and interrupt one’s slumber; to remind one of what happened to mother, father, brothers, to a son, a daughter, a People. In these inescapable moments every Jew in the country feels unwell because he is well. He asks himself: Is there not something treasonous in his existence.”

Winton’s anonymity, for decades after the war, was of course also the result of the silence or reserve of the hundreds he had saved. How strange that seems today, when we must emote about everything.

The deed speaks — and occasionally someone lives long enough to know to what degree.

And now here’s Prof. Krugman, who is in Tokyo:

For almost two decades, Japan has been held up as a cautionary tale, an object lesson on how not to run an advanced economy. After all, the island nation is the rising superpower that stumbled. One day, it seemed, it was on the road to high-tech domination of the world economy; the next it was suffering from seemingly endless stagnation and deflation. And Western economists were scathing in their criticisms of Japanese policy.

I was one of those critics; Ben Bernanke, who went on to become chairman of the Federal Reserve, was another. And these days, I often find myself thinking that we ought to apologize.

Now, I’m not saying that our economic analysis was wrong. The paper I published in 1998 about Japan’s “liquidity trap,” or the paper Mr. Bernanke published in 2000 urging Japanese policy makers to show “Rooseveltian resolve” in confronting their problems, have aged fairly well. In fact, in some ways they look more relevant than ever now that much of the West has fallen into a prolonged slump very similar to Japan’s experience.

The point, however, is that the West has, in fact, fallen into a slump similar to Japan’s — but worse. And that wasn’t supposed to happen. In the 1990s, we assumed that if the United States or Western Europe found themselves facing anything like Japan’s problems, we would respond much more effectively than the Japanese had. But we didn’t, even though we had Japan’s experience to guide us. On the contrary, Western policies since 2008 have been so inadequate if not actively counterproductive that Japan’s failings seem minor in comparison. And Western workers have experienced a level of suffering that Japan has managed to avoid.

What policy failures am I talking about? Start with government spending. Everyone knows that in the early 1990s Japan tried to boost its economy with a surge in public investment; it’s less well-known that public investment fell rapidly after 1996 even as the government raised taxes, undermining progress toward recovery. This was a big mistake, but it pales by comparison with Europe’s hugely destructive austerity policies, or the collapse in infrastructure spending in the United States after 2010. Japanese fiscal policy didn’t do enough to help growth; Western fiscal policy actively destroyed growth.

Or consider monetary policy. The Bank of Japan, Japan’s equivalent of the Federal Reserve, has received a lot of criticism for reacting too slowly to the slide into deflation, and then for being too eager to raise interest rates at the first hint of recovery. That criticism is fair, but Japan’s central bank never did anything as wrongheaded as the European Central Bank’s decision to raise rates in 2011, helping to send Europe back into recession. And even that mistake is trivial compared with the awesomely wrongheaded behavior of the Riksbank, Sweden’s central bank, which raised rates despite below-target inflation and relatively high unemployment, and appears, at this point, to have pushed Sweden into outright deflation.

The Swedish case is especially striking because the Riksbank chose to ignore one of its own deputy governors: Lars Svensson, a world-class monetary economist who had worked extensively on Japan, and who had warned his colleagues that premature rate increases would have exactly the effects they did, in fact, have.

So there are really two questions here. First, why has everyone seemed to get this so wrong? Second, why has the West, with all its famous economists — not to mention the ability to learn from Japan’s woe — made an even worse mess than Japan did?

The answer to the first question, I think, is that responding effectively to depression conditions requires abandoning conventional respectability. Policies that would ordinarily be prudent and virtuous, like balancing the budget or taking a firm stand against inflation, become recipes for a deeper slump. And it’s very hard to persuade influential people to make that adjustment — just look at the Washington establishment’s inability to give up on its deficit obsession.

As for why the West has done even worse than Japan, I suspect that it’s about the deep divisions within our societies. In America, conservatives have blocked efforts to fight unemployment out of a general hostility to government, especially a government that does anything to help Those People. In Europe, Germany has insisted on hard money and austerity largely because the German public is intensely hostile to anything that could be called a bailout of southern Europe.

I’ll be writing more soon about what’s happening in Japan now, and the new lessons the West should be learning. For now, here’s what you should know: Japan used to be a cautionary tale, but the rest of us have messed up so badly that it almost looks like a role model instead.

Blow and Collins

October 30, 2014

In “The Ebola Hysteria” Mr. Blow says that amid the nonsense, paladins heeding the clarion call to help treat Ebola abroad are being treated like lepers when they return.  Ms. Collins has questions today in “A Political Crystal Ball:”  What if Republicans become the majority in the Senate after the election next week? Would anything really change much?  Here’s Mr. Blow:

The absolute hysteria surrounding the Ebola crisis underscores what is wrong with our politics and the policies they spawn.

On Ebola, the possible has overtaken the probable, gobbling it up in a high-anxiety, low-information frenzy of frayed nerves and Purell-ed hands.

There have been nine cases of Ebola in this country. All but one, a Liberian immigrant, are alive.

We aren’t battling a virus in this country as much as a mania, one whipped up by reactionary politicians and irresponsible media. We should be following the science in responding to the threat, but instead we are being led by silliness. And that comes at heavy cost.

The best way to prevent Ebola from becoming a pandemic is to stop it at its source — in West Africa, where the disease is truly exacting a heavy toll with thousands dead and thousands more infected. But the countries in that region can’t do it alone. They need help. The president of the World Bank, Jim Yong Kim, said on Tuesday, “We’ll need a steady state of at least 5,000 health workers from outside the region” to fight Ebola in West Africa. That means health care workers from other countries, including ours.

Many of our health care workers are heroically heeding the clarion call. They are volunteering to head into harm’s way, to put their own lives on the line to save others and to prevent the disease from spreading further. But upon returning to this country, some now risk “mandatory quarantine” even if they test negative for the disease and are asymptomatic. (Ebola can be spread only when a patient shows symptoms.)

The public face of the affront to basic science, civil liberties and displays of valor has become the nurse Kaci Hickox. She accepted an assignment with Doctors Without Borders in Ebola-plagued Sierra Leone. But upon returning to the United States, she was quarantined in a plastic tent in a Newark hospital even after testing negative for the virus. She has been transferred to Maine, but there is a state trooper stationed outside the house where she’s staying.

Hickox is a paladin being treated like a leper.

As Hickox wrote in the Dallas Morning News:

“I had spent a month watching children die, alone. I had witnessed human tragedy unfold before my eyes. I had tried to help when much of the world has looked on and done nothing.”

It would be bad enough if there were just a momentary inconvenience or a legally contestable rights infringement. But it may be more than that. It could deter other health care workers like Hickox from volunteering in the first place.

In other words, irrational governors, like Chris Christie of New Jersey, taking ill-advised steps to control the spread of the disease on a local level could help it to spread on a global one.

That is in part why overly aggressive state-level restrictions have been roundly condemned.

A spokesman for the United Nations secretary general, Ban Ki-moon, said this week:

“Returning health workers are exceptional people who are giving of themselves for humanity. They should not be subjected to restrictions that are not based on science. Those who develop infections should be supported, not stigmatized.”

But stigma feels right as rain for some folks.

When Dr. Kent Brantly, a missionary caring for Ebola patients in Liberia, became the first known American Ebola patient, Ann Coulter called him “idiotic” and chastised him for the “Christian narcissism” of deigning to help people in “disease-ridden cesspools” rather than, say, turning “one single Hollywood power-broker to Christ,” which would apparently “have done more good for the entire world than anything he could accomplish in a century spent in Liberia.”

Oh, the irony of Coulter using this flummery to blast Brantly as idiotic.

This that’s-their-problem-we-have-our-own reasoning is foolish and illogical. It somehow neglects the reality that oceans are not perfect buffers and that viruses, unchecked, will find a way to cross them.

And it reveals a certain international elitism that is not only disturbing but dangerous.

As the World Health Organization’s director general, Dr. Margaret Chan, recently pointed out: “The outbreak spotlights the dangers of the world’s growing social and economic inequalities. The rich get the best care. The poor are left to die.”

Chan also pointed out that there is no vaccine or cure for Ebola — some 40 years after it emerged — in part because “Ebola has been, historically, geographically confined to poor African nations.”

Ebola, like many other diseases, preys on the poor — poor countries and poor populations.

And, on the domestic front, it must not go unmentioned that elections are fast approaching and that politicians are acting — directly or not — out of political self-interest.

In that way, the federal response to Ebola becomes just another opportunity to argue that the federal government is ineffectual, incompetent and out of its depth, particularly under this president. And, in an election year, appearing to be more aggressive than the federal government, while riding a wave of fear, is appealing.

According to a report issued last week by the Pew Research Center, a sizable minority is concerned that Ebola will affect their families. The poll found that “41 percent are worried that they themselves or someone in their family will be exposed to the virus, including 17 percent who say they are very worried.”

Fear has become — and to some degree, has always been — a highly exploitable commodity in the political and media marketplaces. Both profit from public anxiety.

Christie, working feverishly to erase the memories of closed bridges and burned ones, has become the face of the politicians with hard heads and heavy hands seeking hefty political reward from leveraging that fear.

He says his quarantine policy is just “common sense.” That’s just nonsense.

And a bald-faced political ploy.  Now here’s Ms. Collins:

By now, I’m sure you’re asking yourself: If the Republicans take control of the Senate in next week’s elections, what would it mean to me?

Excellent question!

“We’ll get things done, and it means a stop to the Obama agenda,” said the embattled Senator Pat Roberts, Republican of Kansas. Did you notice that “get things done” is immediately followed by “stop”? What do you think that means?

Well, we know that if the Republicans win the majority, all Senate committees would have Republican chairs. The Energy Committee, for instance, might be run by Lisa Murkowski of Alaska, a moderate who is in the pocket of oil and gas lobbies. This would be a dramatic change from the current situation in which the Energy Committee is run by Mary Landrieu of Louisiana, a moderate who is in the pocket of oil and gas lobbies.

On a far more exciting note, the Environment Committee could wind up being led by James Inhofe, the author of “The Greatest Hoax: How the Global Warming Conspiracy Threatens Your Future.”

Under the Republicans, the Senate would be an extremely open body, in which the minority party would be permitted — nay, welcomed — to submit clever amendments designed to make the majority take difficult or embarrassing votes that could be used against them in the next election. The minority leader, Mitch McConnell, has complained about the Democrats’ heavy-handedness on this for years and will undoubtedly be eager to change things if he gets in control.

And what about substance? Republican voters would have every reason to expect that the first item on McConnell’s agenda would be repeal of Obamacare. But many Republican senators have positions on the Affordable Care Act that are nuanced in the extreme. Get rid of the program but keep the part about people with pre-existing conditions. Or the bit that lets young adults stay on their parents’ policies. McConnell himself has said that he wants to let his home state of Kentucky keep its extremely popular version of the program, which is known as Kynect. (“The website can continue, but in my view the best interests of the country would be achieved by pulling out Obamacare root and branch.”)

We look forward to seeing that legislation.

Cynical minds might presume that, with a Republican majority, the Senate would simply continue in its current state of dysfunction, working diligently on an agenda (defund Planned Parenthood, strangle the Environmental Protection Agency in its crib) that will die for lack of 60 votes. Democrats, meanwhile, would fall back in love with the filibuster.

Or maybe not. Some people believe that the Republicans would be eager to prove that they really, actually, genuinely can get things done and would work with the White House on matters of common interest, like tax reform.

“Tax reform” would probably mean lowering some rates and making up for the lost revenue by closing tax loopholes elsewhere. The House Ways and Means Committee did some work on that recently, and the committee chairman actually unveiled a plan. Then John Boehner made fun of him. The plan never came up for a vote. The chairman is retiring.

There are a few matters in which a Republican Senate majority would make a critical difference. One is the budget. This is stupendously important, but since we may have to spend the next two years discussing fiscal cliffs and the rules of reconciliation, it doesn’t seem fair to make us start early.

Also, there’s the matter of presidential nominations. “Two words: Supreme Court,” said Chuck Schumer, the third-ranking Senate Democrat. “If they have the majority, they have far more say over who’s the nominee.”

That could have an impact for decades to come. However, it presupposes that there will be a Supreme Court vacancy. On the plus side, the next two years will be a boom time for prayers for the good health of Ruth Bader Ginsburg.

Presuming the current justices continue in good form, the Republicans could still block other presidential nominations and we would have to get used to government by acting-heads-of. But that’s already pretty close to the norm. One Republican representative recently denounced President Obama for creating an Ebola czar instead of giving the job to the surgeon general, apparently unaware that we have had no surgeon general for more than a year, thanks to the National Rifle Association’s opposition to the administration’s nominee for the job.

Tracked down by The Huffington Post, Representative Jason Chaffetz of Utah claimed he really did know the surgeon general’s post was vacant, and that anybody from the office could still do the Ebola job. “I know there’s some confusion there, but I don’t think I was confused,” he said stoutly.

See, Representative Jason Chaffetz is perfectly willing to live with an acting surgeon general. And maybe someone could talk Eric Holder into hanging around for a while longer.

Krugman’s blog, 10/28/14

October 29, 2014

There were two posts yesterday.  The first was “Notes on Japan:”

I’m going to Japan soon, and have been putting some numbers and thoughts together, both about Abenomics and the longer-term lessons from the Japanese experience. Here are some notes on the way.

First, can we stop writing articles wondering whether Europe or the United States might have a Japanese-type lost decade? At this point the question should be whether there is any realistic possibility that we won’t. Both the US and Europe are approaching the seventh anniversary of the start of their respective Great Recessions; the US is far from fully recovered, and Europe not recovered at all. Japan is no longer a cautionary tale; in fact, in terms of human welfare it’s closer to a role model, having avoided much of the suffering the West has imposed on its citizens.

Part of the impression that Japan has been a bigger disaster comes, of course, from Japanese demography: if you look at total GDP, or even GDP per capita, you miss the fact that Japan’s working-age population has been declining since 1997. I’ve tried to update the numbers on real GDP per working-age adult, defined as 15-64; I start in 1993 because of annoying data problems, but it would look similar if I took it back a few more years. Here’s a comparison of the euro area, the US, and Japan:

So even in growth terms Japan doesn’t look much worse than the US at this point, and is actually slightly ahead of the euro area. That doesn’t mean Japan did OK; it just means that we’ve done terribly.
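The demographic adjustment Krugman describes is simple arithmetic: divide real GDP by the working-age (15–64) population and index the result to a base year. Here is a minimal Python sketch of that calculation; the figures below are purely illustrative placeholders, not actual GDP or population statistics.

```python
# Sketch of real GDP per working-age adult, indexed to a base year.
# All numbers below are hypothetical, chosen only to illustrate the effect.

def gdp_per_working_age(real_gdp, working_age_pop):
    """Real GDP divided by population aged 15-64."""
    return real_gdp / working_age_pop

def index_to_base(series):
    """Rescale a series so its first value equals 100."""
    base = series[0]
    return [100.0 * x / base for x in series]

# Hypothetical country: total real GDP grows slowly while the
# working-age population shrinks, as in Japan after 1997.
real_gdp        = [500, 505, 510, 515]   # billions, illustrative
working_age_pop = [ 87,  86,  85,  84]   # millions, illustrative

per_capita = [gdp_per_working_age(g, p)
              for g, p in zip(real_gdp, working_age_pop)]
indexed = index_to_base(per_capita)

# Growth per working-age adult outpaces headline GDP growth
# because the denominator is falling.
print(indexed)
```

With a shrinking denominator, the indexed per-working-age-adult series rises faster than headline GDP, which is the point of Krugman's comparison: demography, not only policy, flatters or damns the raw numbers.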

What about Abenomics? The decision to go ahead with the consumption tax increase — which some of us pleaded with them not to do — dealt a serious blow to the plan’s momentum. There has been some recovery in growth:

But losing momentum is a really bad thing here, since the whole point is to break deflationary expectations and get self-sustaining expectations of moderate inflation instead. For what it’s worth, the indicator of expected inflation I suggested, using US TIPS, interest differentials, and reversion to long-run purchasing power parity, is holding up:

But I still worry that Japan may fall into the timidity trap.

The whole business with the consumption tax drives home a point a number of people have made: the conventional view that short-term stimulus must be coupled with action to produce medium-term fiscal stability sounds prudent, but has proved disastrous in practice. In the US context it means that any effort to help the economy now gets tied up in the underlying battle over the future of the welfare state, which means that nothing happens. But even where that isn’t true, talking about fiscal sustainability when deflationary pressure is the clear and present danger distracts policy from immediate needs, and can all too easily lead to counterproductive moves — as just happened in Japan. When I see, say, the IMF inserting into its latest Japan survey (pdf) a section titled “Maintaining focus on fiscal sustainability” my heart sinks (and so, maybe, does Abenomics); it’s hard to argue against sustainability, but under current conditions it means taking your eye off the ball, and Japan really, really can’t afford to do that.

More notes as my cramming for the coming quiz continues.

Yesterday’s second post was “Lars Svensson 1, Sadomonetarists 0:”

Fast FT tells it like it is.

But it does tell you something about the difficulty of making use of good economics and good economists. The Riksbank takes on as deputy governor one of the world’s leading experts on precisely the monetary conditions the world now faces; what’s more, he and those of similar views have seen many of those views — views that many people found implausible — vindicated by events since the financial crisis, in what amounts to a remarkable success for economic analysis. And yet the rest of the Riksbank brushes aside everything he says, going instead for gut feelings and sadomonetarist clichés.

And of course Lars was completely right — but the damage may be irreversible.

Friedman and Bruni

October 29, 2014

In “ISIS and Vietnam” The Moustache of Wisdom says there are parallels between the war in Vietnam and the conflict now in Iraq and Syria that haven’t been fully explored.  Mr. Bruni, in “Toward Better Teachers,” tells us that in a new book and an interview, the former head of the nation’s largest school system confronts teacher performance.  In the comments “JKile” from White Haven, PA had this to say:  “You, Mr. Bruni, like most who write about education, have no clue.”  Here’s The Moustache of Wisdom:

In May, I visited Vietnam and met with university students. After a week of being love-bombed by Vietnamese, who told me how much they admire America, want to work or study there and have friends and family living there, I couldn’t help but ask myself: “How did we get this country so wrong? How did we end up in a war with Vietnam that cost so many lives and drove them into the arms of their most hated enemy, China?”

It’s a long, complicated story, I know, but a big part of it was failing to understand that the core political drama of Vietnam was an indigenous nationalist struggle against colonial rule — not the embrace of global communism, the interpretation we imposed on it.

The North Vietnamese were both communists and nationalists — and still are. But the key reason we failed in Vietnam was that the communists managed to harness the Vietnamese nationalist narrative much more effectively than our South Vietnamese allies, who were too often seen as corrupt or illegitimate. The North Vietnamese managed to win (with the help of brutal coercion) more Vietnamese support not because most Vietnamese bought into Marx and Lenin, but because Ho Chi Minh and his communist comrades were perceived to be the more authentic nationalists.

I believe something loosely akin to this is afoot in Iraq. The Islamic State, or ISIS, with its small core of jihadists, was able to seize so much non-jihadist Sunni territory in Syria and Iraq almost overnight — not because most Iraqi and Syrian Sunnis suddenly bought into the Islamist narrative of ISIS’s self-appointed caliph. Most Iraqi and Syrian Sunnis don’t want to marry off their daughters to a bearded Chechen fanatic, and more than a few of them pray five times a day and like to wash it down with a good Scotch. They have embraced or resigned themselves to ISIS because they were systematically abused by the pro-Shiite, pro-Iranian regime of Bashar al-Assad in Syria and Prime Minister Nuri Kamal al-Maliki in Iraq — and because they see ISIS as a vehicle to revive Sunni nationalism and end Shiite oppression.

The challenge the U.S. faces in Iraq is trying to defeat ISIS in tacit alliance with Syria and Iran, whose local Shiite allies are doing a lot of the fighting in Iraq and Syria. Iran is seen by many Syrian and Iraqi Sunnis as the “colonial power” dominating Iraq to keep it weak.

Obsessed with communism, America intervened in Vietnam’s civil war and took the place of the French colonialists. Obsessed with jihadism and 9/11, are we now doing the bidding of Iran and Syria in Iraq? Is jihadism to Sunni nationalism what communism was to Vietnamese nationalism: a fearsome ideological movement that triggers emotional reactions in the West — deliberately reinforced with videotaped beheadings — but that masks a deeper underlying nationalist movement that is to some degree legitimate and popular in its context?

I wonder what would have happened had ISIS not engaged in barbarism and declared: “We are the Islamic State. We represent the interests of Syrian and Iraqi Sunnis who have been brutalized by Persian-directed regimes of Damascus and Baghdad. If you think we’re murderous, then just Google ‘Bashar al-Assad and barrel bombs’ or ‘Iraqi Shiite militias and the use of power drills to kill Sunnis.’ You’ll see what we faced after you Americans left. Our goal is to secure the interests of Sunnis in Iraq and Syria. We want an autonomous ‘Sunnistan’ in Iraq just like the Kurds have a Kurdistan — with our own cut of Iraq’s oil wealth.”

That probably would have garnered huge support from Sunnis everywhere. ISIS’s magazine, Dabiq, recently published an article, “Reflections on the Final Crusade,” (transcribed by the Middle East Media Research Institute), which argued that America’s war on ISIS only serves the interests of America’s enemies: Iran and Russia. It quotes U.S. strategists as warning that Iran has created a “Shia-belt from Tehran through Baghdad to Beirut,” a threat much greater than ISIS.

Then why did ISIS behead two American journalists? Because ISIS is a coalition of foreign jihadists, local Sunni tribes and former Iraqi Baath Party military officers. I suspect the jihadists in charge want to draw the U.S. into another “crusade” against Muslims — just like Osama bin Laden — to energize and attract Muslims from across the world and to overcome their main weakness, namely that most Iraqi and Syrian Sunnis are attracted to ISIS simply as a vehicle of their sectarian resurgence, not because they want puritanical/jihadist Islam. There is no better way to get secular Iraqi and Syrian Sunnis to fuse with ISIS than have America bomb them all.

ISIS needs to be contained before it destabilizes islands of decency like Jordan, Kurdistan and Lebanon. But destroying it? That will be hard, because it’s not just riding on some jihadist caliphate fantasy, but also on deep Sunni nationalist grievances. Separating the two is the best way to defeat ISIS, but the only way to separate mainstream Sunnis from jihadists is for mainstream Sunnis and Shiites to share power, to build a healthy interdependency from what is now an unhealthy one. Chances of that? Very low. I hope President Obama has thought this through.

Now we get to Mr. Bruni:

More than halfway through Joel Klein’s forthcoming book on his time as the chancellor of New York City’s public schools, he zeroes in on what he calls “the biggest factor in the education equation.”

It’s not classroom size, school choice or the Common Core.

It’s “teacher quality,” he writes, adding that “a great teacher can rescue a child from a life of struggle.”

We keep coming back to this. As we wrestle with the urgent, dire need to improve education — for the sake of social mobility, for the sake of our economic standing in the world — the performance of teachers inevitably draws increased scrutiny. But it remains one of the trickiest subjects to broach, a minefield of hurt feelings and vested interests.

Klein knows the minefield better than most. As chancellor from the summer of 2002 through the end of 2010, he oversaw the largest public school system in the country, and did so for longer than any other New York schools chief in half a century.

That gives him a vantage point on public education that would be foolish to ignore, and in “Lessons of Hope: How to Fix Our Schools,” which will be published next week, he reflects on what he learned and what he believes, including that poor parents, like rich ones, deserve options for their kids; that smaller schools work better than larger ones in poor communities; and that an impulse to make kids feel good sometimes gets in the way of giving them the knowledge and tools necessary for success.

I was most struck, though, by what he observes about teachers and teaching.

Because of union contracts and tenure protections in place when he began the job, it was “virtually impossible to remove a teacher charged with incompetence,” he writes. Firing a teacher “took an average of almost two and a half years and cost the city over $300,000.”

And the city, like the rest of the country, wasn’t (and still isn’t) managing to lure enough of the best and brightest college graduates into classrooms. “In the 1990s, college graduates who became elementary-school teachers in America averaged below 1,000 points, out of a total of 1,600, on the math and verbal Scholastic Aptitude Tests,” he writes. In New York, he notes, “the citywide average for all teachers was about 970.”

In an interview with him after I finished the book, I asked for a short list of measures that might improve teacher quality.

He said that schools of education could stiffen their selection criteria in a way that raises the bar for who goes into teaching and elevates the public perception of teachers. “You’d have to do it over the course of several years,” he said. But if implemented correctly, he said, it would draw more, not fewer, people into teaching.

He said the curriculum at education schools should be revisited as well. There’s a growing chorus for this; it’s addressed in the recent best seller “Building a Better Teacher,” by Elizabeth Green. But while Green homes in on the teaching of teaching, Klein stressed to me that teachers must acquire mastery of the actual subject matter they’re dealing with. Too frequently they don’t.

Klein urged “a rational incentive system” that doesn’t currently exist in most districts. He’d like to see teachers paid more for working in schools with “high-needs” students and for tackling subjects that require additional expertise. “If you have to pay science and physical education teachers the same, you’re going to end up with more physical education teachers,” he said. “The pay structure is irrational.”

In an ideal revision of it, he added, there would be “some kind of pay for performance, rewarding success.” Salaries wouldn’t be based primarily on seniority.

Such challenges of the status quo aren’t welcomed by many teachers and their unions. Just look at their fury about a Time magazine cover story last week that reported — accurately — on increasingly forceful challenges to traditional tenure protections. They hear most talk about tenure and teacher quality as an out-and-out attack, a failure to appreciate all the obstacles that they’re up against. They hear phrases like “rescue a child from a life of struggle” and rightly wonder if that, ultimately, is their responsibility.

It isn’t. But it does happen to be a transformative opportunity that they, like few other professionals, have. In light of that, we owe them, as a group, more support in terms of salary, more gratitude for their efforts and outright reverence when they succeed.

But they owe us a discussion about education that fully acknowledges the existence of too many underperformers in their ranks. Klein and others who bring that up aren’t trying to insult or demonize them. They’re trying to team up with them on a project that matters more than any other: a better future for kids.

Krugman’s blog, 10/27/14

October 28, 2014

There were four posts yesterday.  The first was “What Secular Stagnation Isn’t:”

Et tu, Gavyn? In the course of an interesting piece suggesting that there has been a sustained slowdown in the trend rate of growth, Gavyn Davies declares that

Some version of secular stagnation does seem to be taking hold.

He later acknowledges that there are different meanings assigned to the term; but it’s really important not to feed the confusion. To the extent that secular stagnation is an important and perhaps shocking concept, it really has to be distinguished from the proposition that potential growth is slowing down. What I wrote:

For those new to or confused by the term, secular stagnation is the claim that underlying changes in the economy, such as slowing growth in the working-age population, have made episodes like the past five years in Europe and the US, and the last 20 years in Japan, likely to happen often. That is, we will often find ourselves facing persistent shortfalls of demand, which can’t be overcome even with near-zero interest rates.

Secular stagnation is not the same thing as the argument, associated in particular with Bob Gordon (who’s also in the book), that the growth of economic potential is slowing, although slowing potential might contribute to secular stagnation by reducing investment demand. It’s a demand-side, not a supply-side concept. And it has some seriously unconventional implications for policy.

This is a really important distinction, because secular stagnation and a supply-side growth slowdown have completely different policy implications. In fact, in some ways the morals are almost opposite.

If labor force growth and productivity growth are falling, the indicated response is (a) see if there are ways to increase efficiency and (b) if there aren’t, live within your reduced means. A growth slowdown from the supply side is, roughly speaking, a reason to look favorably on structural reform and austerity.

But if we have a persistent shortfall in demand, what we need is measures to boost spending — higher inflation, maybe sustained spending on public works (and less concern about debt because interest rates will be low for a long time).

So please, let’s not confuse these issues. This isn’t some academic quibble; we’re trying to understand what ails us, and saying that high blood pressure and low blood pressure are more or less the same thing is not at all helpful.

The second post yesterday was “When Banks Aren’t the Problem:”

OK, an admission: Sometimes it seems to me as if economists and policymakers have spent much of the past six years slowly, stumblingly figuring out stuff they would already have known if they had read my 1998 Brookings Paper (pdf) on Japan’s liquidity trap. For example, there’s been huge confusion about whether Ricardian equivalence makes fiscal policy ineffective, vast amazement that increases in the monetary base haven’t led to big increases in the broader money supply or inflation; yet that was all clear 16 years ago, once you thought hard about the Japanese trap.

And now here we go with another: the role of troubled banks. Europe has done its stress tests, which aren’t too bad; but now we’re getting worried commentary that maybe, just maybe, a clean bill of banking health won’t stop the slide into deflation.

Folks, we’ve been there; in the 90s it was conventional wisdom that Japan’s zombie banks were the problem, and that once they were fixed all would be well. But I took a hard look at the logic and evidence for that proposition (pp. 174-177), and it just didn’t hold up.

I know, I know — blowing my own horn, and all that. But if I am not for myself, who will be for me? And in any case, it has been really frustrating to watch so many people reinvent fallacies that were thoroughly refuted long ago.

Oh, and if people had read my old stuff they might have managed to avoid embarrassing themselves so much in open letters to Bernanke and suchlike.

Yesterday’s third post was “ACA OK:”

The Times has a very nice survey of the results to date of the Affordable Care Act, aka Obamacare, aka death panels and the moral equivalent of slavery.

The verdict: It’s going well. A big expansion in coverage, which is affordable for a large majority; the main exceptions seem to be people who went for the minimum coverage allowed, keeping premiums down but leaving large co-payments. None of the predictions of disaster has come even slightly true.

The last post yesterday was “Open Letters of 1933:”

My friend and old classmate Irwin Collier, of the Free University of Berlin, sends me to an open letter to monetary officials warning of the dangers of printing money and debasing the dollar, claiming that these policies will undermine confidence and threaten to create a renewed financial crisis. But it’s not the famous 2010 letter to Ben Bernanke, whose signatories refuse to admit that they were wrong; it’s a letter sent by Columbia economists in 1933 (pdf):

It’s all there: decrying inflation amid deflation, invocation of “confidence”, a chin-stroking pose of being responsible while urging policies that would perpetuate depression. Those who refuse to learn from the past are condemned to repeat it.

Brooks, Cohen and Nocera

October 28, 2014

Oh, cripes.  In a spectacular, flaming pile of turds called “Why Partyism Is Wrong” Bobo actually says that political discrimination is more prevalent than you would imagine, and its harmful effects haven’t been fully considered.  The hypocrisy is mind-boggling…  Mr. Cohen, in “A Climate of Fear,” says we have the remorse of Pandora, and that the technological spirit we have let slip from the box has turned into a monster.  Mr. Nocera asks a question:  “Are Our Courts For Sale?”  (Joe, everything in this nation is now officially for sale…) He says in the post-Citizens United political system, ads are affecting judges and becoming corrosive to the rule of law.  No shit, really?  Who’da thunk it?  Here’s Bobo’s flaming bag of dog poop:

A college student came to me recently with a quandary. He’d spent the summer interning at a conservative think tank. Now he was applying to schools and companies where most people were liberal. Should he remove the internship from his résumé?

I advised him not to. Even if people disagreed with his politics, I argued, they’d still appreciate his public spiritedness. But now I’m thinking that advice was wrong. There’s a lot more political discrimination than I thought. In fact, the best recent research suggests that there’s more political discrimination than there is racial discrimination.

For example, political scientists Shanto Iyengar and Sean Westwood gave 1,000 people student résumés and asked them which students should get scholarships. The résumés had some racial cues (membership in African-American Students Association) and some political cues (member of Young Republicans).

Race influenced decisions. Blacks favored black students 73 percent to 27 percent, and whites favored black students slightly. But political cues were more powerful. Both Democrats and Republicans favored students who agreed with them 80 percent of the time. They favored students from their party even when other students had better credentials.

Iyengar and Westwood conducted other experiments to measure what Cass Sunstein of Harvard Law School calls “partyism.” They gave subjects implicit association tests, which measure whether people associate different qualities with positive or negative emotions. They had people play the trust game, which measures how much people are willing to trust different kinds of people.

In those situations, they found pervasive prejudice. And subjects’ political biases were stronger than their racial biases.

In a Bloomberg View column last month, Sunstein pointed to polling data that captured the same phenomenon. In 1960, roughly 5 percent of Republicans and Democrats said they’d be “displeased” if their child married someone from the other party. By 2010, 49 percent of Republicans and 33 percent of Democrats said they would mind.

Politics is obviously a passionate activity, in which moral values clash. Debates over Obamacare, charter schools or whether the United States should intervene in Syria stir serious disagreement. But these studies are measuring something different. People’s essential worth is being measured by a political label: whether they should be hired, married, trusted or discriminated against.

The broad social phenomenon is that as personal life is being de-moralized, political life is being hyper-moralized. People are less judgmental about different lifestyles, but they are more judgmental about policy labels.

The features of the hyper-moralized mind-set are all around. More people are building their communal and social identities around political labels. Your political label becomes the prerequisite for membership in your social set.

Politics becomes a marker for basic decency. Those who are not members of the right party are deemed to lack basic compassion, or basic loyalty to country.

Finally, political issues are no longer just about themselves; they are symbols of worth and dignity. When many rural people defend gun rights, they’re defending the dignity and respect of rural values against urban snobbery.

There are several reasons politics has become hyper-moralized in this way. First, straight moral discussion has atrophied. There used to be public theologians and philosophers who discussed moral issues directly. That kind of public intellectual is no longer prominent, so moral discussion is now done under the guise of policy disagreement, often by political talk-show hosts.

Second, highly educated people are more likely to define themselves by what they believe than by their family religion, ethnic identity or region.

Third, political campaigns and media provocateurs build loyalty by spreading the message that electoral disputes are not about whether the top tax rate will be 36 percent or 39 percent, but are about the existential fabric of life itself.

The problem is that hyper-moralization destroys politics. Most of the time, politics is a battle between competing interests or an attempt to balance partial truths. But in this fervent state, it turns into a Manichaean struggle of light and darkness. To compromise is to betray your very identity. When schools, community groups and workplaces get defined by political membership, when speakers get disinvited from campus because they are beyond the pale, then every community gets dumber because they can’t reap the benefits of diverging viewpoints and competing thought.

This mentality also ruins human interaction. There is a tremendous variety of human beings within each political party. To judge human beings on political labels is to deny and ignore what is most important about them. It is to profoundly devalue them. That is the core sin of prejudice, whether it is racism or partyism.

The personal is not political. If you’re judging a potential daughter-in-law on political grounds, your values are out of whack.

Well, if she supports the teatards it probably means she’s a narrow minded little bigot, which are not values I support…  Next up we have Mr. Cohen:

I don’t know about you, but I find dinner conversations often veer in strange directions these days, like the friend telling me the other evening that the terrorists calling themselves Islamic State could easily dispatch one of their own to West Africa, make sure he contracts Ebola, then get him onto the London Underground or the Paris Metro or the New York subway, squeezed up against plenty of other folk at rush hour, and bingo!

“I mean,” he said, “I can’t possibly be the first to have thought of this. It’s easy. They want to commit suicide anyway, right?”

Right: We are vulnerable, less safe than we thought.

A mouthful of pasta and on he went about how the time has come to blow up the entire Middle East, it’s done for, finished; and how crazy the energy market is right now with the Saudis trying to drive down prices in order to make costly American shale oil production less viable, which in turn should ensure the United States continues to buy Saudi crude even now that it has become the world’s largest oil producer.

But of course the Russians are not happy about cheap oil, nor are the Iranians, and the bottom line is it’s chaos out there, sharks devouring one another. Nothing happens by chance, certainly not a 25 percent drop in oil prices. Somebody would pay for this plot.

Not so long ago, I struggled to remind myself, this guy was brimming over with idealism, throwing in a big investment-banking job to go to the Middle East and invest his energies in democratic change, a free press, a new order, bending my ear about how the time had come for the region and his country in particular to join the modern world. Nothing in the Arab genome condemned the region to backwardness, violence and paranoia. His belief was fervid. It was married to deeds. He walked the walk for change. I was full of admiration.

Then a shadow fell over the world: annexations, beheadings, pestilence, Syria, Gaza and the return of the Middle Eastern strongmen. Hope gave way to fever. When Canada is no longer reassuring, it’s all over.

We are vulnerable and we are fearful. That is the new zeitgeist, at least in the West. Fanaticism feeds on frustration; and frustration is widespread because life for many is not getting better. People fret.

Come to think of it, our conversation was not encrypted. How foolish, anybody could be listening in, vacuuming my friend’s dark imaginings into some data-storage depot in the American desert, to be sifted through by a bunch of spooks who could likely hack into his phone or drum up some charge of plotting against the West by having ideas about the propagation of Ebola. Even the healers are being humiliated and quarantined, punished for their generous humanity, while the humanoid big-data geeks get soda, steak and a condo in Nevada.

There were cameras and listening devices everywhere. Just look up, look around. It was a mistake to say anything within range of your phone. Lots of people were vulnerable. Anyone could hack into the software in your car, or the drip at your hospital bed, and make a mess of you.

What has happened? Why this shadow over the dinner table and such strange fears? It seems we have the remorse of Pandora. The empowering, all-opening, all-devouring technological spirit we have let slip from the box has turned into a monster, giving the killers-for-a-caliphate new powers to recruit, the dictators new means to repress, the spies new means to listen in, the fear mongers new means to spread alarm, the rich new means to get richer at the expense of the middle class, the marketers new means to numb, the tax evaders new means to evade, viruses new means to spread, devices new means to obsess, the rising powers new means to block the war-weary risen, and anxiety new means to inhabit the psyche.

Hyper-connection equals isolation after all. What a strange trick, almost funny. The crisis, Antonio Gramsci noted in the long-ago 20th century, “consists precisely in the fact that the old is dying and the new cannot be born.” Many people I talk to, and not only over dinner, have never previously felt so uneasy about the state of the world. There is something in the air, fin-de-siècle Vienna with Twitter.

Hope, of course, was the one spirit left behind in Pandora’s Box. One of the things in the air of late was a Google executive dropping to earth from the stratosphere, a fall of 135,890 feet, plummeting at speeds of up to 822 miles per hour, and all smiles after his 25-mile tumble. Technology is also liberation. It just doesn’t feel that way right now. The search is on for someone to dispel foreboding and embody, again, the hope of the world.

And now we get to Mr. Nocera:

One of the most shocking ads aired this political season was aimed at a woman named Robin Hudson.

Hudson, 62, is not a congressional or Senate candidate. Rather, she is a State Supreme Court justice in North Carolina, seeking her second eight-year term. It wasn’t all that long ago when, in North Carolina, judicial races were publicly financed. If a candidate spent more than $100,000, it was unusual. Ads mainly consisted of judicial candidates promising to be fair. Any money the candidates raised was almost entirely local.

This ad in North Carolina, however, which aired during the primary season, was a startling departure. First, the money came from an organization called Justice for All NC — which, in turn, was funded primarily by the Republican State Leadership Committee. That is to say, it was the kind of post-Citizens United money that has flooded the political system and polluted our politics.

And then there was its substance. “We want judges to protect us,” the ad began. The voice-over went on to say that when child molesters sued to stop electronic monitoring, Judge Hudson had “sided with the predators.” It was a classic attack ad.

Not surprisingly, the truth was a bit different. In 2010, the State Supreme Court was asked to rule on whether an electronic-monitoring law could apply to those who had been convicted before it passed. Hudson, in a dissent, wrote that the law could not be applied retroactively.

As it turns out, the ad probably backfired. “It clearly exceeded all bounds of propriety and accuracy,” said Robert Orr, a former North Carolina Supreme Court justice. Hudson won her primary and has a good chance of retaining her seat in the election next week.

But her experience is being replicated in many of the 38 states that hold some form of judicial elections. “We are seeing money records broken all over the country,” said Bert Brandenburg, the executive director of Justice at Stake, which tracks money in judicial elections. “Right now, we are watching big money being spent in Michigan. We are seeing the same thing in Montana and Ohio. There is even money going into a district court race in Missouri.” He added, “This is the new normal.”

To be sure, the definition of big money in a judicial election is a lot different than big money in a hotly contested Senate race. According to Alicia Bannon at the Brennan Center for Justice at New York University School of Law, a total of $38.7 million was spent on judicial elections in 2009-10. During the next election cycle, the total rose to $56.4 million.

But that is partly the point. “With a relatively small investment, interest groups have opportunities to shape state courts,” said Bannon. Sure enough, that is exactly what seems to be going on. Americans for Prosperity, financed by the Koch brothers, has been involved in races in Tennessee and Montana, according to Brandenburg. And the Republican State Leadership Committee started something this year called the Judicial Fairness Initiative, which supports conservative candidates.

In that district court race in Missouri, for instance, Judge Pat Joyce, a 20-year judicial veteran, has been accused in attack ads bought by the Republican State Leadership Committee of being a liberal. (“Radical environmentalists think Joyce is so groovy,” says one ad.) Republicans are spending $100,000 on attack ads and have given another $100,000 to her opponent, a man whose campaign was nearly $13,000 in debt before the Republican money showed up.

It should be obvious why this is a problem. Judges need to be impartial, and that is harder when they have to raise a lot of money from people who are likely to appear before them in court — in order to compete with independent campaign expenditures. An influx of independent campaign money aimed at one judge can also serve as a warning shot to other judges that they’ll face the same opposition if their rulings aren’t conservative enough. Most of all, it is terribly corrosive to the rule of law if people don’t believe in the essential fairness of judges.

Yet there seems to be little doubt that the need to raise money does, in fact, affect judges. Joanna Shepherd, a professor at Emory Law, conducted an empirical study that tried to determine whether television attack ads were causing judges to rule against criminal defendants more often. (Most attack ads revolve around criminal cases.) She found, as she wrote in a report entitled “Skewed Justice,” that “the more TV ads aired during state supreme court judicial elections in a state, the less likely justices are to vote in favor of criminal defendants.”

“There are two hypotheses,” she told me when I called to ask her about the study. “Either judges are fearful of making rulings that provide fodder for the ads. Or the TV ads are working and helping get certain judges elected.”

“Either way,” she concluded, “outcomes are changing.”

Krugman, solo

October 27, 2014

Mr. Blow is still off, so Prof. Krugman has the place to himself.  Today, in “Ideology and Investment,” he tells us why America won’t build infrastructure.  Here he is:

America used to be a country that built for the future. Sometimes the government built directly: Public projects, from the Erie Canal to the Interstate Highway System, provided the backbone for economic growth. Sometimes it provided incentives to the private sector, like land grants to spur railroad construction. Either way, there was broad support for spending that would make us richer.

But nowadays we simply won’t invest, even when the need is obvious and the timing couldn’t be better. And don’t tell me that the problem is “political dysfunction” or some other weasel phrase that diffuses the blame. Our inability to invest doesn’t reflect something wrong with “Washington”; it reflects the destructive ideology that has taken over the Republican Party.

Some background: More than seven years have passed since the housing bubble burst, and ever since, America has been awash in savings — or more accurately, desired savings — with nowhere to go. Borrowing to buy homes has recovered a bit, but remains low. Corporations are earning huge profits, but are reluctant to invest in the face of weak consumer demand, so they’re accumulating cash or buying back their own stock. Banks are holding almost $2.7 trillion in excess reserves — funds they could lend out, but choose instead to leave idle.

And the mismatch between desired saving and the willingness to invest has kept the economy depressed. Remember, your spending is my income and my spending is your income, so if everyone tries to spend less at the same time, everyone’s income falls.

There’s an obvious policy response to this situation: public investment. We have huge infrastructure needs, especially in water and transportation, and the federal government can borrow incredibly cheaply — in fact, interest rates on inflation-protected bonds have been negative much of the time (they’re currently just 0.4 percent). So borrowing to build roads, repair sewers and more seems like a no-brainer. But what has actually happened is the reverse. After briefly rising after the Obama stimulus went into effect, public construction spending has plunged. Why?

In a direct sense, much of the fall in public investment reflects the fiscal troubles of state and local governments, which account for the great bulk of public investment.

These governments generally must, by law, balance their budgets, but they saw revenues plunge and some expenses rise in a depressed economy. So they delayed or canceled a lot of construction to save cash.

Yet this didn’t have to happen. The federal government could easily have provided aid to the states to help them spend — in fact, the stimulus bill included such aid, which was one main reason public investment briefly increased. But once the G.O.P. took control of the House, any chance of more money for infrastructure vanished. Once in a while Republicans would talk about wanting to spend more, but they blocked every Obama administration initiative.

And it’s all about ideology, an overwhelming hostility to government spending of any kind. This hostility began as an attack on social programs, especially those that aid the poor, but over time it has broadened into opposition to any kind of spending, no matter how necessary and no matter what the state of the economy.

You can get a sense of this ideology at work in some of the documents produced by House Republicans under the leadership of Paul Ryan, the chairman of the Budget Committee. For example, a 2011 manifesto titled “Spend Less, Owe Less, Grow the Economy” called for sharp spending cuts even in the face of high unemployment, and dismissed as “Keynesian” the notion that “decreasing government outlays for infrastructure lessens government investment.” (I thought that was just arithmetic, but what do I know?) Or take a Wall Street Journal editorial from the same year titled “The Great Misallocators,” asserting that any money the government spends diverts resources away from the private sector, which would always make better use of those resources.

Never mind that the economic models underlying such assertions have failed dramatically in practice, that the people who say such things have been predicting runaway inflation and soaring interest rates year after year and keep being wrong; these aren’t the kind of people who reconsider their views in the light of evidence. Never mind the obvious point that the private sector doesn’t and won’t supply most kinds of infrastructure, from local roads to sewer systems; such distinctions have been lost amid the chants of private sector good, government bad.

And the result, as I said, is that America has turned its back on its own history. We need public investment; at a time of very low interest rates, we could easily afford it. But build we won’t.

Krugman’s blog, 10/25/14

October 26, 2014

There was one post yesterday, “Notes on Easy Money and Inequality:”

I’ve received some angry mail over this William Cohan piece attacking Janet Yellen for supposedly feeding inequality through quantitative easing; Cohan and my correspondents take this inequality-easy money story as an established fact, and accuse anyone who supports the Fed’s policy while also decrying inequality as a hypocrite if not a lackey of Wall Street.

All this presumes, however, that Cohan knows whereof he speaks. Actually, his biggest complaint about easy money is mostly a red herring, and the overall story about QE and inequality is not at all clear.

Let’s start with the complaint that forms the heart of many attacks on QE: the harm done to people trying to live off the interest income on their savings. There’s no question that such people exist, and that in general low interest rates on deposits hurt people who don’t own other financial assets. But how big a story is it?

Let’s turn to the Survey of Consumer Finances (pdf), which has information on dividend and interest income by wealth class:

[Chart: dividend and interest income by wealth class]

The bottom three-quarters of the wealth distribution basically has no investment income. The people in the 75-90 range do have some. But even in 2007, when interest rates were relatively high, it was only 1.9 percent of their total income. By 2010, with rates much lower, this was down to 1.6 percent; maybe it fell a bit more after QE, although QE didn’t have much impact on deposit rates. The point, however, is that the overall impact on the income of middle-income Americans was, necessarily, small; you can’t lose a lot of interest income if there wasn’t much to begin with. If you want to point to individual cases, fine — but the claim that the hit to interest was a major factor depressing incomes at the bottom is just false.

There’s a somewhat different issue involving pensions: as the Bank of England pointed out in a study (pdf) that a lot of Fed-haters have cited but fewer, I suspect, have actually read, easy money has offsetting effects on pension funds: it raises the value of their assets, but reduces the rate of return looking forward. These effects should be roughly a wash if a pension scheme is fully funded, but they do hurt if it’s currently underfunded, which many are. So the BoE concludes that easy money has somewhat hurt pensions — but also suggests that the effect is modest.
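The "roughly a wash" logic can be sketched with toy present-value arithmetic. This is my own illustration, not the BoE's calculation: when a fund's assets mirror its liability profile, a rate cut raises both sides equally, so a fully funded scheme has no shortfall at any rate, while an underfunded scheme's gap widens in dollar terms as rates fall.

```python
# Toy pension arithmetic (hypothetical numbers): liabilities are a
# level payout for 20 years; assets are a matching portfolio holding
# some fraction of that value.

def pv(cash_flow, rate, years):
    """Present value of a level annual cash flow for `years` years."""
    return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

def shortfall(funded_share, rate, years=20, payout=100):
    """Liabilities minus assets when assets mirror the liability profile."""
    liabilities = pv(payout, rate, years)
    assets = funded_share * liabilities
    return liabilities - assets

for funded_share in (1.0, 0.7):   # fully funded vs. 70% funded
    for rate in (0.05, 0.02):     # normal rates vs. easy money
        print(f"funded {funded_share:.0%} at rate {rate:.0%}: "
              f"shortfall {shortfall(funded_share, rate):.1f}")
```

The fully funded scheme shows a zero shortfall at both rates (the wash), while the underfunded scheme's shortfall grows when the discount rate drops, which is the sense in which easy money hurts it.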

So where does the impression that QE has involved a massive redistribution to the rich come from? A lot of it, I suspect, comes from the fact that equity prices have surged since 2010 while housing has not — and since middle-class families have a lot of their wealth in houses, this seems highly unequalizing.

Here, however, I think it’s useful to go back to first principles for a second. Do we expect easy money to have differential effects on asset prices? Yes, but mainly having to do with longevity. Values of short-term assets like deposits or, for that matter, software that will soon be obsolete don’t vary much with interest rates; values of long-term assets like housing should vary a lot. Equities are claims on the assets of corporations, which include a mix of short-term stuff like software, long-term stuff like structures, and invisible assets like goodwill and market position that may span the whole range of longevity.

The point is that it’s not at all obvious why housing should be left behind in general by easy money. In fact, one of the dirty little secrets of monetary policy is that it normally works through housing, with little direct impact on business investment.
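As a back-of-the-envelope illustration of the longevity point above (my own toy numbers, not from the post), compare how the present value of a short-lived asset's cash flows and a long-lived asset's cash flows respond to the same drop in the discount rate:

```python
# Why long-lived assets (housing, structures) are more sensitive to
# interest rates than short-lived ones (soon-obsolete software).
# Hypothetical cash flows and rates.

def present_value(cash_flow, rate, years):
    """Present value of a level annual cash flow for `years` years."""
    return sum(cash_flow / (1 + rate) ** t for t in range(1, years + 1))

def price_gain(years, old_rate=0.05, new_rate=0.02):
    """Percent rise in value when the discount rate falls."""
    return (present_value(100, new_rate, years) /
            present_value(100, old_rate, years) - 1) * 100

for years in (2, 30):  # short-lived asset vs. long-lived asset
    print(f"{years:2d}-year asset gains {price_gain(years):.1f}% "
          f"when rates fall from 5% to 2%")
```

The two-year asset gains only about 4 percent while the thirty-year asset gains about 46 percent, which is why, in principle, easy money should lift housing at least as much as equities.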

So why was this time different? Surely the answer is that housing had an immense bubble in the mid-2000s, so that it wasn’t going to come roaring back. Meanwhile, stocks took a huge beating in 2008-9, but this was financial disruption and panic, and they would probably have made a strong comeback even without QE.

If we take a longer-term perspective, you can see that the relationship between monetary policy and stocks versus housing varies a lot. The charts show real stock prices (from Robert Shiller) and real housing prices:

[Charts: real stock prices and real housing prices. Credit Robert Shiller]

The easy-money policies that followed the bursting of the 90s stock bubble produced a surge in housing prices, not so much in stocks — the opposite of recent years. The point is that a lot depends on the history, and the belief that QE systematically favors the kinds of assets the wealthy own is wrong or at least overstated.

Meanwhile, for most people neither interest rates nor asset prices are key to financial health — instead, it’s all about wages. And new research just posted on Vox, using time-series methods on micro data, finds that

the empirical evidence points toward monetary policy actions affecting inequality in the direction opposite to the one suggested by Ron Paul and the Austrian economists.

Which brings me back to the reason most of us favor QE. No, Janet Yellen and I aren’t secretly on the Goldman Sachs payroll. Nor do I (or, I suspect, Yellen) believe that unconventional monetary policy can produce miracles. The main response to a depressed economy should have been fiscal; the case for a large infrastructure program remains overwhelming.

But given the political realities, that’s not going to happen. The Fed is the only game in town. And you really don’t want to trash the Fed’s efforts without seriously doing your homework.

The Pasty Little Putz, Friedman, Kristof and Bruni

October 26, 2014

The Pasty Little Putz has decided to tell us all about “The Pope and the Precipice.”  He whines that the Catholic Church is inching toward a crisis of faith.  (There’s nothing quite so rabidly hidebound as a convert…)  In the comments “gemli” from Boston points out that “Here we have another pointless tempest in a non-existent teapot.”  In “The Last Train” The Moustache of Wisdom says Israeli, Palestinian and Jordanian environmentalists may have the best model for Middle East peace.  If we didn’t know it already Mr. Kristof tells us that “The American Dream Is Leaving America.”  He says fixing the education system is the civil rights challenge of our era, especially as the United States is being eclipsed in economic and educational mobility.  Mr. Bruni takes a look at “Fathers, Sons and the Presidency” and says our country’s history is one of daddy issues. Just look at the last three presidents.  Here’s The Putz:

To grasp why events this month in Rome — publicly feuding cardinals, documents floated and then disavowed — were so remarkable in the context of modern Catholic history, it helps to understand certain practical aspects of the doctrine of papal infallibility.

On paper, that doctrine seems to grant extraordinary power to the pope — since he cannot err, the First Vatican Council declared in 1870, when he “defines a doctrine concerning faith or morals to be held by the whole Church.”

In practice, though, it places profound effective limits on his power.

Those limits are set, in part, by normal human modesty: “I am only infallible if I speak infallibly, but I shall never do that,” John XXIII is reported to have said. But they’re also set by the binding power of existing teaching, which a pope cannot reverse or contradict without proving his own office, well, fallible — effectively dynamiting the very claim to authority on which his decisions rest.

Not surprisingly, then, popes are usually quite careful. On the two modern occasions when a pontiff defined a doctrine of the faith, it was on a subject — the holiness of the Virgin Mary — that few devout Catholics consider controversial. In the last era of major church reform, the Second Vatican Council, the popes were not the intellectual protagonists, and the council’s debates — while vigorous — were steered toward a (pope-approved) consensus: The documents that seemed most like developments in doctrine, on religious liberty and Judaism, passed with less than a hundred dissenting votes out of more than 2,300 cast.

But something very different is happening under Pope Francis. In his public words and gestures, through the men he’s elevated and the debates he’s encouraged, this pope has repeatedly signaled a desire to rethink issues where Catholic teaching is in clear tension with Western social life — sex and marriage, divorce and homosexuality.

And in the synod on the family, which concluded a week ago in Rome, the prelates in charge of the proceedings — men handpicked by the pontiff — formally proposed such a rethinking, issuing a document that suggested both a general shift in the church’s attitude toward nonmarital relationships and a specific change, admitting the divorced-and-remarried to communion, that conflicts sharply with the church’s historic teaching on marriage’s indissolubility.

At which point there was a kind of chaos. Reports from inside the synod have a medieval feel — churchmen berating each other, accusations of manipulation flying, rebellions bubbling up. Outside Catholicism’s doors, the fault lines were laid bare: geographical (Germans versus Africans; Poles versus Italians), generational (a 1970s generation that seeks cultural accommodation and a younger, John Paul II-era generation that seeks to be countercultural) and theological above all.

In the end, the document’s controversial passages were substantially walked back. But even then, instead of a Vatican II-style consensus, the synod divided, with large numbers voting against even watered-down language around divorce and homosexuality. Some of those votes may have been cast by disappointed progressives. But many others were votes cast, in effect, against the pope.

In the week since, many Catholics have downplayed the starkness of what happened or minimized the papal role. Conservatives have implied that the synod organizers somehow went rogue, that Pope Francis’s own views were not really on the table, that orthodox believers should not be worried. More liberal Catholics have argued that there was no real chaos — this was just the kind of freewheeling, Jesuit-style debate Francis was hoping for — and that the pope certainly suffered no meaningful defeat.

Neither argument is persuasive. Yes, Francis has taken no formal position on the issues currently in play. But all his moves point in a pro-change direction — and it simply defies belief that men appointed by the pope would have proposed departures on controversial issues without a sense that Francis would approve.

If this is so, the synod has to be interpreted as a rebuke of the implied papal position. The pope wishes to take these steps, the synod managers suggested. Given what the church has always taught, many of the synod’s participants replied, he and we cannot.

Over all, that conservative reply has the better of the argument. Not necessarily on every issue: The church’s attitude toward gay Catholics, for instance, has often been far more punitive and hostile than the pastoral approach to heterosexuals living in what the church considers sinful situations, and there are clearly ways that the church can be more understanding of the cross carried by gay Christians.

But going beyond such a welcome to a kind of celebration of the virtues of nonmarital relationships generally, as the synod document seemed to do, might open a divide between formal teaching and real-world practice that’s too wide to be sustained. And on communion for the remarried, the stakes are not debatable at all. The Catholic Church was willing to lose the kingdom of England, and by extension the entire English-speaking world, over the principle that when a first marriage is valid a second is adulterous, a position rooted in the specific words of Jesus of Nazareth. To change on that issue, no matter how it was couched, would not be development; it would be contradiction and reversal.

SUCH a reversal would put the church on the brink of a precipice. Of course it would be welcomed by some progressive Catholics and hailed by the secular press. But it would leave many of the church’s bishops and theologians in an untenable position, and it would sow confusion among the church’s orthodox adherents — encouraging doubt and defections, apocalypticism and paranoia (remember there is another pope still living!) and eventually even a real schism.

Those adherents are, yes, a minority — sometimes a small minority — among self-identified Catholics in the West. But they are the people who have done the most to keep the church vital in an age of institutional decline: who have given their energy and time and money in an era when the church is stained by scandal, who have struggled to raise families and live up to demanding teachings, who have joined the priesthood and religious life in an age when those vocations are not honored as they once were. They have kept the faith amid moral betrayals by their leaders; they do not deserve a theological betrayal.

Which is why this pope has incentives to step back from the brink — as his closing remarks to the synod, which aimed for a middle way between the church’s factions, were perhaps designed to do.

Francis is charismatic, popular, widely beloved. He has, until this point, faced strong criticism only from the church’s traditionalist fringe, and managed to unite most Catholics in admiration for his ministry. There are ways that he can shape the church without calling doctrine into question, and avenues he can explore (annulment reform, in particular) that would bring more people back to the sacraments without a crisis. He can be, as he clearly wishes to be, a progressive pope, a pope of social justice — and he does not have to break the church to do it.

But if he seems to be choosing the more dangerous path — if he moves to reassign potential critics in the hierarchy, if he seems to be stacking the next synod’s ranks with supporters of a sweeping change — then conservative Catholics will need a cleareyed understanding of the situation.

They can certainly persist in the belief that God protects the church from self-contradiction. But they might want to consider the possibility that they have a role to play, and that this pope may be preserved from error only if the church itself resists him.

So I guess Putzy would love it if the Church went back to burning heretics (just call it a jihad) and selling indulgences…  Next up we have The Moustache of Wisdom:

When Secretary of State John Kerry began his high-energy effort to forge an Israeli-Palestinian peace, I argued that it was the last train for a two-state solution. If it didn’t work, it would mean that the top-down, diplomatically constructed two-state concept was over as a way out of that conflict. For Israelis and Palestinians, the next train would be the one coming at them.

Well, now arriving on Track 1 …

That train first appeared in the Gaza war and could soon be rounding the bend in the West Bank. Just last week an East Jerusalem Palestinian killed a 3-month-old Israeli baby and wounded seven others when he deliberately rammed his car into a light rail station.

Can a bigger collision be averted? Not by Washington. It can only come from Israelis and Palestinians acting on their own, directly with one another, with real imagination, to convert what is now an “unhealthy interdependency” into a “healthy interdependency.”

“Never happen!” you say. Actually, that model already exists among Israeli, Palestinian and Jordanian environmentalists — I’ll tell you about it in a second — and the example they set is the best hope for the future.

Here’s why: The Israeli right today, led by Prime Minister Bibi Netanyahu, has some really strong arguments for maintaining the status quo — arguments that in the long run are deadly for Israel as a Jewish democratic state.

“It is the definition of tragedy,” said the Hebrew University philosopher Moshe Halbertal. “You have all these really good arguments for maintaining a status quo that will destroy you.”

What arguments? Israel today is surrounded on four out of five borders — South Lebanon, Gaza, Sinai and Syria — not by states but by militias, dressed as civilians, armed with rockets and nested among civilians. No other country faces such a threat. When Israeli commanders in the Golan Heights look over into Syria today, they see Russian and Iranian military advisers, along with Syrian Army units and Hezbollah militiamen from Lebanon, fighting jihadist Sunni militias — and the jihadists are usually winning. “They’re much more motivated,” an Israeli defense official told me.

That is not a scene that inspires risk-taking on the West Bank, right next to Israel’s only international airport. The fact that Israel unilaterally withdrew from Gaza in 2005 and Hamas took over there in 2007 and then devoted most of its energies to fighting Israel rather than building Palestine also does not inspire risk-taking to move away from the status quo. Israel offered Hamas a cease-fire eight days into the Gaza war, but Hamas chose to expose its people to vast destruction and killing for 43 more days, hoping to generate global pressure on Israel to make concessions to Hamas. It was sick; it failed; and it’s why some Gazans are trying to flee Hamas rule today.

Diplomatically, President Obama on March 17 personally, face-to-face, offered compromise ideas on key sticking points in the Kerry framework to the Palestinian president, Mahmoud Abbas, and asked him point blank if he would accept them. Obama is still waiting for an answer.

Netanyahu and Abbas each moved on some issues, but neither could accept the whole Kerry framework. So the status quo prevails. But this is no normal status quo. It gets more toxic by the day. If Israel retains the West Bank and its 2.7 million Palestinians, it will be creating an even bigger multisectarian, multinational state in its belly, with one religion/nationality dominating the other — exactly the kind of state that is blowing up in civil wars everywhere around it.

Also, the longer this status quo goes on, the more the juggernaut of Israel’s settlement expansion in the West Bank goes on, fostering more Israeli delegitimization on the world stage. Right after the Gaza war, in which the United States basically defended Israel, Israel announced the seizure of nearly 1,000 more acres of West Bank land for settlements near Bethlehem. “No worries,” Israeli officials said, explaining that this is land that Israel would keep in any two-state deal. That would be fine if Israel also delineated the area Palestinians would get — and stopped building settlements there, too. But it won’t. That can only lead to trouble.

“Ironically, most Israeli settlement activity over the last year has been in areas that will plausibly be Israel in any peace map,” said David Makovsky, a member of the Kerry peace team, who is now back at the Washington Institute. “However, by Israel refusing to declare that it will confine settlement activities only to those areas, others do not make the distinction either. Instead, a perception is created that Israel is not sincere about a two-state solution — sadly fueling a European delegitimization drive. Israel’s legitimate security message gets lost because it appears to some that it is really about ideology.” Adds the former U.S. peace negotiator Dennis Ross: “If you say you’re committed to two states, your settlement policy has to reflect that.”

Alas, though, “rather than trying to think imaginatively about how to solve this problem,” said Halbertal, Israel is doing the opposite — “bringing the regional geopolitical problem into our own backyard and pushing those elements in Palestinian society that prefer nonviolence into a dead end. We are setting ourselves on fire with the best of arguments.”

Is anyone trying to build healthy interdependencies? Last week, I had a visit from EcoPeace Middle East, led by Munqeth Mehyar, a Jordanian architect; Gidon Bromberg, an Israeli environmental lawyer; and Nader al-Khateeb, a Palestinian water expert. Yes, they travel together.

They came to Washington to warn of the water crisis in Gaza. With little electricity to desalinate water or pump in chlorine — and Gazans having vastly overexploited their only aquifer — seawater is now seeping in so badly that freshwater is in short supply. Waste management has also collapsed, so untreated waste is being dumped into the Mediterranean, where it moves north with the current, threatening drinking water produced by Israel’s desalination plant in Ashkelon. It is all one ecosystem. Everyone is connected.

Up north, though, EcoPeace helped to inspire — through education, research and advocacy — Israeli, Palestinian and Jordanian mayors to rehabilitate the Jordan River, which they had all turned into an open sewer. Since 1994, Jordan has stored water in the winter from its Yarmouk River in Israel’s Sea of Galilee, and then Israel gives it back to Jordan in the summer — like a water bank. It shows how “prior enemies can create positive interdependencies once they start trusting each other,” said Bromberg.

And that is the point. The only source of lasting security is not walls, rockets, U.N. votes or European demonstrations. It’s relationships of trust between neighbors that create healthy interdependencies — ecological and political. They are the hardest things to build, but also the hardest things to break once in place.

Next up is Mr. Kristof:

The best escalator to opportunity in America is education. But a new study underscores that the escalator is broken.

We expect each generation to do better, but, currently, more young American men have less education than their parents (29 percent) than have more education (20 percent).

Among young Americans whose parents didn’t graduate from high school, only 5 percent make it through college themselves. In other rich countries, the figure is 23 percent.

The United States is devoting billions of dollars to compete with Russia militarily, but maybe we should try to compete educationally. Russia now has the largest percentage of adults with a university education of any industrialized country — a position once held by the United States, although we’re plunging in that roster.

These figures come from the annual survey of education from the Organization for Economic Cooperation and Development, or O.E.C.D., and it should be a shock to Americans.

A basic element of the American dream is equal access to education as the lubricant of social and economic mobility. But the American dream seems to have emigrated because many countries do better than the United States in educational mobility, according to the O.E.C.D. study.

As recently as 2000, the United States still ranked second in the share of the population with a college degree. Now we have dropped to fifth. Among 25-to-34-year-olds — a glimpse of how we will rank in the future — we rank 12th, while once-impoverished South Korea tops the list.

A new Pew survey finds that Americans consider the greatest threat to our country to be the growing gap between the rich and poor. Yet we have constructed an education system, dependent on local property taxes, that provides great schools for the rich kids in the suburbs who need the least help, and broken, dangerous schools for inner-city children who desperately need a helping hand. Too often, America’s education system amplifies not opportunity but inequality.

My dad was a World War II refugee who fled Ukraine and Romania and eventually made his way to France. He spoke perfect French, and Paris would have been a natural place to settle. But he felt that France was stratified and would offer little opportunity to a penniless Eastern European refugee, or even to his children a generation later, so he set out for the United States. He didn’t speak English, but, on arrival in 1951, he bought a copy of the Sunday edition of The New York Times and began to teach himself — and then he worked his way through Reed College and the University of Chicago, earning a Ph.D. and becoming a university professor.

He rode the American dream to success; so did his only child. But while he was right in 1951 to bet on opportunity in America rather than Europe, these days he would perhaps be wrong. Researchers find economic and educational mobility are now greater in Europe than in America.

That’s particularly sad because, as my Times colleague Eduardo Porter noted last month, egalitarian education used to be America’s strong suit. European countries excelled at first-rate education for the elites, but the United States led the way in mass education.

By the mid-1800s, most American states provided a free elementary education to the great majority of white children. In contrast, as late as 1870, only 2 percent of British 14-year-olds were in school.

Then the United States was the first major country, in the 1930s, in which a majority of children attended high school. By contrast, as late as 1957, only 9 percent of 17-year-olds in Britain were in school.

Until the 1970s, we were pre-eminent in mass education, and Claudia Goldin and Lawrence Katz of Harvard University argue powerfully that this was the secret to America’s economic rise. Then we blew it, and the latest O.E.C.D. report underscores how the rest of the world is eclipsing us.

In effect, the United States has become 19th-century Britain: We provide superb education for elites, but we falter at mass education.

In particular, we fail at early education. Across the O.E.C.D., an average of 70 percent of 3-year-olds are enrolled in education programs. In the United States, it’s 38 percent.

In some quarters, there’s a perception that American teachers are lazy. But the O.E.C.D. report indicates that American teachers work far longer hours than their counterparts abroad. Yet American teachers earn 68 percent as much as the average American college-educated worker, while the O.E.C.D. average is 88 percent.

Fixing the education system is the civil rights challenge of our era. A starting point is to embrace an ethos that was born in America but is now an expatriate: that we owe all children a fair start in life in the form of access to an education escalator.

Let’s fix the escalator.

And the highways, and the bridges, and…  Last but not least we have Mr. Bruni:

I’m always thinking back to that lunch in Kennebunkport, because I saw it all there: what drove George W. Bush toward the presidency; what shaped so many of his decisions in office.

I was interviewing his parents at the family’s compound on the Maine coast. The 2000 Republican National Convention was just weeks away, and Bush by then was a well-established political phenomenon. Even so, his father said that he remained amazed that George had made it so far. Never had George’s parents seen such a grand future for him.

Perhaps an hour into our conversation, George’s brother Jeb, the Bush boy who had been tagged for greatness, happened to join us. From that moment on, when I asked his father a question, he’d sometimes say that Jeb should answer it, because Jeb knew best.

And as he gazed at Jeb, I noticed in his eyes what George must have spotted, craved and inwardly raged about for so much of his life: an admiration that he had been hard pressed to elicit. Running for the presidency was his way of demanding it. Winning the White House was his way of finally getting it.

And he went on to govern in defiance of the father who had cast such a long shadow over him and nursed such doubts about him. He went on to show him who was boss. No matter the cost, he invaded Iraq and toppled Saddam Hussein, whom Dad had spared. No matter the tactics, he secured a second term, which Dad hadn’t.

Will he whitewash all of this in the tribute that he has written to his father, “41,” which is scheduled for publication right after the midterms? I’m guessing yes, but whatever the evasions or revisionism of “41,” it will be more than just a book. It will be the latest chapter in a father-son psychodrama that altered the country’s course.

And it will be a reminder of how many other father-son psychodramas did likewise.

While Bush is only the second child of a president to duplicate his dad’s ascent, he’s hardly the first occupant of the Oval Office whose career can be read as a response to his father’s dominance or disappearance, an answer to his father’s example. The history of American politics is a history of daddy issues, of sons who felt compelled to impress, outdo, usurp, avenge or redeem their fathers.

There are striking leitmotifs. Neither Barack Obama nor Bill Clinton ever really knew his father, and it’s impossible to divorce either’s ambition from that absence. The two men have said as much themselves.

Clinton’s father died in an accident just three months before he was born, leaving the future president with “the feeling that I had to live for two people” and “make up for the life he should have had,” he wrote in his autobiography, “My Life.”

“And his memory infused me, at a younger age than most, with a sense of my own mortality,” he continued. “The knowledge that I, too, could die young drove me both to try to drain the most out of every moment of life and to get on with the next big challenge.”

Shortly after Obama’s birth, his parents separated. Obama saw his father only once subsequently, when he was 10 years old and his father traveled from Kenya to Hawaii for a monthlong visit. The brevity of that contact — the distance between father and son — informed the narrative and title of his memoir, “Dreams From My Father,” and was a principal engine of his accomplishments.

“If you have somebody that is absent, maybe you feel like you’ve got something to prove when you’re young, and that pattern sets itself up over time,” he said in an interview with Newsweek in 2008. It’s a pattern detectable in many presidents.

In a 2012 story for Slate titled “Why Do So Many Politicians Have Daddy Issues?” Barron YoungSmith wrote, “American politics is overflowing with stories of absent fathers, alcoholic fathers, neglectful fathers.”

To look back through the years is to see presidents in rebellion against their fathers and presidents in thrall to them, presidents trying to be bigger and better than the fathers who let them down (Abraham Lincoln, Ronald Reagan) as well as presidents living out the destinies that their fathers scripted for them (John F. Kennedy, William Howard Taft). It’s to behold the inevitably fraught father-son dynamic playing out on the gaudiest stages, with the most profound consequences.

Did Clinton’s unappeasable needs come from the enormous hole that his father left? Did Obama develop his aloofness early, as a shield against the kind of disappointment that his father caused him?

The particular imprints of fathers on sons have been conspicuous in the leading characters from the most recent presidential elections. Paul Ryan was just 16 when he discovered his father dead of a heart attack. He grew up fast, and became zealous about physical fitness. Mitt Romney was trying to complete his own father’s failed quest for the presidency, and at the start of debates where he was allowed notepaper, he’d scrawl “Dad” on the blank sheet.

Al Gore, too, was attending to the unfinished business of his father, who had made it to the Senate but never the White House. And John McCain, the son and grandson of four-star admirals in the Navy, was trying to do those generations of men proud.

The country’s presidents and presidential aspirants were of course also trying to please and honor mothers, and the presidency is perhaps just as much a history of mommy issues. But there’s something singular about the father-son face-off, as there is about the mother-daughter pas de deux. In the parent whose gender we share, we’re more likely to find our yardstick, our template, our rival.

And with fathers and sons, there’s a special potential for misunderstanding, for the kind of chasm in which resentments and compulsions flourish. Men aren’t socialized to express their feelings, to speak their hearts, to talk it out.

So sons and fathers often stand at the greatest remove, neither able to read the other. From what I’ve witnessed, and from what I personally know, many of us men spend the early part of our lives misjudging our fathers, and acting out accordingly, and then the latter part finally coming to know them. It’s one of our longest journeys.

And maybe George W. Bush — who styled himself as the kind of folksy Texan that his father wasn’t — is at last completing his. Maybe he’s reached a point of uncomplicated appreciation. How different things might have been if he’d arrived there earlier.
