Publications by Type: Newspaper Article

2012
Bouras, Stelios, and Harris Mylonas. 2012. “As Athens's Woes Grow, Neo-Nazis Gain Clout.”
DelRosario, John S., and Donna Hicks. 2012. “Granting People Their Dignity.”
Friedman, Thomas L., James A. Robinson, and Daron Acemoglu. 2012. “Why Nations Fail.”
Owen, Roger E. 2012. “Five Myths About Syria.”
Several centuries ago, there was a nation that rose to become a world power on the strength of its innovation and its dedication to capitalist enterprise. It became a major center of trade, a financial powerhouse whose name was well known across the planet. It was blessed with an unusual society that rewarded talent and hard work, not social position—one of the few places where a person who had nothing could realistically dream of a far better life. And then this vibrant place, the envy of the world, suddenly collapsed. Its economy shrank; its people left.

The place was Venice, and if it is hard to imagine the charming tourist destination was once one of the richest places on the Earth, then that is precisely what MIT economist Daron Acemoglu wants me to understand. I had come to the Sloan School of Management cafeteria, its tall windows framing the Charles River, for coffee and a discussion of his favorite topic—why nations fail. It is a question that has intrigued people for thousands of years, but now Acemoglu and Harvard’s James Robinson offer an answer in an ambitious new book. Their theory, the fruit of a long intellectual partnership and research that digs back to the origins of agriculture, explains why some countries succeed and others do not, why some are awash in prosperity, while others are consumed by poverty and suffering. It explains how a city-state like Venice can rise to prominence, then quickly fail. And it offers a chastening message about the prospects for our own country.

So why do nations fail? Acemoglu has a one-word answer: “Politics.”

What this means, he explains, is that nations succeed in the long term when they are able to share power broadly. They either develop inclusive institutions, which spread political and economic power widely, or “extractive institutions,” designed to plunder wealth for the few.

Throughout history, says Acemoglu, “the great struggle is between the masses and elites who seek to capture the government and put it to their own uses.”

It is a less obvious answer than it first appears, and its implications are more surprising.

To begin with, consider the factors that the two reject. Geography, for example, has long been a favorite explanation for the success of nations. Some places are blessed with natural advantages, while others are not. Certainly, sitting on coveted goods brings great wealth: witness Saudi Arabia or, for that matter, Russia. But over the long reach of history, geography fails to explain which nations have staying power. One can make a convincing list of all the geographical benefits that have accrued to the United States, but when Europeans first arrived in the 15th century, it was South America, not North, that was rich and (relatively) thickly settled.

I did a brief interview for All Things Considered last Friday, on the topic of media handling of the current war scare over Iran. Here's a link to the story, which ran over the weekend.

The interview got me thinking about the issue of media coverage of this whole business, and I'm sorry to say that most mainstream news organizations have let us down again. Although the failures haven't been as egregious as the New York Times's and Washington Post's wholesale swallowing of the Bush administration's sales pitch for war in 2002, on the whole the high-end media coverage has been disappointing. Here are my Top Ten Media Failures in the 2012 Iran War Scare.

#1: Mainstreaming the war. As I've written before, when prominent media organizations keep publishing alarmist pieces about how war is imminent, likely, inevitable, etc., this may convince the public that it is going to happen sooner or later, and it discourages people from looking for better alternatives. Exhibits A and B for this problem are Jeffrey Goldberg's September 2010 article in The Atlantic and Ronen Bergman's February 2012 article in the New York Times Magazine. Both articles reported that top Israeli leaders believed time was running out and suggested that an attack might come soon.

#2: Loose talk about Iran's “nuclear [weapons] program.” A recurring feature of Iran war coverage has been a tendency to refer to Iran's “nuclear weapons program” as if its existence were an established fact. U.S. intelligence services still believe that Iran does not have an active weapons program, and the IAEA has declined to render that judgment as well. Interestingly, both the Times' public editor Arthur Brisbane and Washington Post ombudsman Patrick Pexton have recently chided their own organizations for muddying this issue.

#3: Obsessing about Ahmadinejad. A typical feature of discussions of Iran is a round of references to Iranian President Mahmoud Ahmadinejad, usually including an obligatory mention of his penchant for Holocaust denial and his famously mistranslated statement about Israel “vanishing from the page of time.” These references are often linked to the question of whether Iran's leaders are rational. But the obsession with Ahmadinejad is misleading in several ways: he has little or no influence over Iran's national security policy, his power has been declining sharply in recent months, and Supreme Leader Ali Khamenei, who does make the key decisions, has repeatedly said that nuclear weapons are contrary to Islam. And while we're on the subject of Iranian “rationality,” it is perhaps worth noting that Iran's leaders weren't goofy enough to invade Iraq on a pretext and then spend trillions of dollars fighting an unnecessary war there.

#4: Ignoring Iranian weakness. As I've noted before, Iran is not a very powerful country at present, though it does have considerable potential and could exert far more international influence if its leaders were more competent. But its defense budget is perhaps 1/50th the size of U.S. defense spending, and it has no meaningful power-projection capabilities. It could not mount a serious invasion of any of its neighbors, and could not block the Strait of Hormuz for long, if at all. Among other things, that is why it has to rely on marriages of convenience with groups like Hezbollah or Hamas (who aren't that powerful either). Yet as Glenn Greenwald argues here, U.S. media coverage often portrays Iran as a looming threat, without offering any serious military analysis of its very limited capabilities.

#5: Failing to ask why Iran might want a bomb. Discussions of a possible war also tend to assume that if Iran does in fact intend to get a nuclear weapon, it is for some nefarious purpose. But the world's nine nuclear powers all obtained these weapons first and foremost for deterrent purposes (i.e., because they faced significant external threats and wanted a way to guarantee their own survival). Iran has good reason to worry: It has nuclear-armed states on two sides, a very bad relationship with the world's only superpower, and more than three dozen U.S. military facilities in its neighborhood. Prominent U.S. politicians repeatedly call for “regime change” there, and a covert action campaign against Iran has been underway for some time, including the assassination of Iranian civilian scientists.

#6: Failing to consider why Iran might NOT want a bomb. At the same time, discussions of Iran's nuclear ambitions often fail to consider the possibility that Iran might be better off without a nuclear weapons capability. As noted above, Supreme Leader Khamenei has repeatedly said that nuclear weapons are contrary to Islam, and he may very well mean it. He could be lying, but that sort of lie would be risky for a regime whose primary basis for legitimacy is its devotion to Islam. Moreover, Iran has the greatest power potential of any state in the Gulf, and if it had better leadership it would probably be the strongest power in the region. If it gets nuclear weapons, some of its neighbors may follow suit, which would partly negate Iran's conventional advantages down the road. Furthermore, staying on this side of the nuclear weapons threshold keeps Iran from being suspected of complicity should a nuclear terrorist attack occur somewhere. For all these reasons, I'd bet Iran wants a latent nuclear option, but not an actual nuclear weapon. Yet there has been relatively little discussion of that possibility in recent media coverage.

#7: Exaggerating Israel's capabilities. In a very real sense, this whole war scare has been driven by the possibility that Israel might feel so endangered that it would launch a preventive war on its own, even if U.S. leaders warned it not to. But the IDF doesn't have the capacity to take out Iran's new facility at Fordow, because it has no aircraft that can carry a bomb big enough to penetrate the layers of rock that protect the facility. And if Israel can't take out Fordow, then it can't do much to delay Iran's program at all, and the only reason it might strike is to try to drag the United States in. In short, the recent war scare, whose taproot is the belief that Israel might strike on its own, may be based on a mirage.

#8: Letting spinmeisters play fast and loose with facts. Journalists have to let officials and experts express their views, but they shouldn't let them spout falsehoods without pushing back. Unfortunately, there have been some egregious cases where prominent journalists allowed politicians or government officials to utter howlers without being called on it. When Rick Santorum announced on Meet the Press that “there were no inspectors” in Iran, for example, host David Gregory didn't challenge this obvious error. (In fact, Iran may be the most heavily inspected country in the history of the IAEA).

Even worse, when Israeli ambassador Michael Oren appeared on MSNBC last week, he offered the following set of dubious claims, without challenge:

“[Iran] has built an underground nuclear facility trying to hide its activities from the world. It has been enriching uranium to a high rate [sic.] that has no explanation other than a military nuclear program - that has been confirmed by the International Atomic Energy Agency now several times. It is advancing very quickly on an intercontinental ballistic missile system that's capable of carrying nuclear warheads.”

Unfortunately, MSNBC host Andrea Mitchell apparently didn't know that Oren's claims were either false or misleading. 1) Iran's underground facility was built to make it hard to destroy, not to “hide its activities,” and IAEA inspectors have already been inside it. 2) Iran is not enriching at a “high rate” (i.e., to weapons-grade); it is currently enriching to only 20% (which is not high enough to build a bomb). 3) Western intelligence experts do not think Iran is anywhere near having an ICBM capability.

In another interview on NPR, Oren falsely accused Iran of “killing hundreds, if not thousands of American troops,” a claim that NPR host Robert Siegel did not challenge. Then we got the following exchange:

Oren: “Imagine Iran which today has a bunch of speedboats trying to close the Strait of Hormuz. Imagine if Iran has a nuclear weapon. Imagine if they could hold the entire world oil market blackmailed. Imagine if Iran is conducting terrorist organizations through its terrorist proxies - Hamas, Hezbollah. Now we know there's a connection with al-Qaida. You can't respond to them because they have an atomic weapon.”

Siegel: “Yes. You're saying the consequences of Iran going nuclear are potentially global, and the consequences of a U.S. strike on Iran might also be further such attacks against the United States...”

Never mind the fact that we have been living in the nuclear age for some 60 years now, and no nuclear state has ever been able to conduct the sort of aggressive blackmail that Oren suggests Iran would be able to do. Nuclear weapons are good for deterrence, and not much else, but the news media keep repeating alarmist fantasies without asking whether they make sense.

Politicians and government officials are bound to use media moments to sell whatever story they are trying to spin; that's their job. But it is up to journalists to make this hard, and neither Mitchell nor Siegel did. (For another example of sloppy fact-checking, go here.)

#9: What about the human beings? One of the more bizarre failures of reporting on the war debate has been the dearth of discussion of what an attack might mean for Iranian civilians. If you take out some of Iran's nuclear facilities from the air, for example, there's a very real risk of spreading radioactive material or other poisonous chemicals in populated areas, thereby threatening the lives of lots of civilians. Yet discussions of the potentially dangerous consequences of a war tend to emphasize the dangers of Iranian retaliation, or the impact on oil prices, instead of asking how many innocent Iranian civilians might die in the attack. You know: the same civilians we supposedly want to liberate from a despotic clerical regime.

#10: Could diplomacy work? Lastly, an underlying theme in a lot of the coverage is the suggestion that diplomacy is unlikely to work because it's been tried before and failed. But the United States has had very little contact with Iranian officials over the past thirty years, and only one brief set of direct talks in the past three years. Moreover, we've insisted all along that Iran has to give up all nuclear enrichment, which is almost certainly a deal-breaker from Tehran's perspective. The bottom line is that diplomacy has yet to succeed, and it might not in any case, but it's also never been seriously tried.

I'm sure you can find exceptions to the various points I've made here, especially if you move outside major media outlets and focus on online publications and the blogosphere. Which may be why more people are inclined to get their news and analysis there, instead of from the usual outlets. But on the whole, Americans haven't been well-served by media coverage of the Iran debate. As the president said last week, “loose talk” about an issue like this isn't helpful.

The killing of 16 Afghan civilians, nine of them children, by a rogue U.S. soldier is a tragedy in several senses. First, because of the loss of innocent life. Second, because the alleged perpetrator is likely someone whose psyche and spirit broke under the pressure of a prolonged counterinsurgency campaign. And third, because it was all so unnecessary.

Because Barack Obama has run a generally hawkish foreign policy, his Republican opponents don't have a lot of daylight to exploit on that issue. But if they weren't so preoccupied with sounding tough, they could go after Obama's foolish decision to escalate the war in Afghanistan back in 2009, which remains his biggest foreign policy blunder to date.

A brutal reality is that counterinsurgency campaigns almost always produce atrocities. Think My Lai, Abu Ghraib, the Haditha massacre, and now this. You simply can't place soldiers in the ambiguous environment of an indigenous insurgency, where the boundary between friend and foe is exceedingly hard to discern, and not expect some of them to crack and go rogue. Even if discipline holds and mental health is preserved, a few commanders will get overzealous and order troops to cross the line between legitimate warfare and barbarism. There isn't a “nice” way to wage a counterinsurgency—no matter how often we talk about “hearts and minds”—which is why leaders ought to think long and hard before they order the military to occupy another country and try to remake its society. Or before they decide to escalate a war that is already underway.

And the sad truth is that this shameful episode would not have happened had Obama rejected the advice of his military advisors and stopped trying to remake Afghanistan from the start of his first term. Yes, I know he promised to get out of Iraq and focus on Central Asia, but no president fulfills all his campaign promises (remember how he was going to close Gitmo?) and Obama could have pulled the plug on this failed enterprise at the start. Maybe he didn't for political reasons, or because commanders like David Petraeus and Stanley McChrystal convinced him they could turn things around. Or maybe he genuinely believed that U.S. national security required an open-ended effort to remake Afghanistan.

Whatever the reason, he was wrong. The sad truth is that the extra effort isn't going to produce a significantly better outcome, and the lives and money that we've spent there since 2009 are mostly wasted. That was apparent before this weekend's events, which can only make an already futile task harder.

Here's what I wrote about this situation back in November 2009:

“America's odds of winning this war are slim. The Karzai government is corrupt, incompetent and resistant to reform. The Taliban have sanctuaries in Pakistan and can hide among the local populace, making it possible for them simply to outlast us. Pakistan has backed the Afghan Taliban in the past and is not a reliable partner now. Our European allies are war-weary and looking for the exits. The more troops we send and the more we interfere in Afghan affairs, the more we look like foreign occupiers and the more resistance we will face. There is therefore little reason to expect a U.S. victory.”

It didn't take a genius to see this, and I had lots of company in voicing my doubts. It gives me no pleasure to recall it now. Indeed, I wish the critics had been proven wrong and Obama, Petraeus, McChrystal, et al. had been proven right. I concede that the situation in Afghanistan may get worse after we depart, and that more civilians will die at the hands of the Taliban or as a consequence of renewed civil war. But the brutal fact remains: the United States can't fix that country, it is not a vital U.S. interest that we try, and we should have been gone a long time ago.

George Orwell’s classic novel “Animal Farm” is the definitive depiction of how any rebellion or social revolt risks not just failure but a reversal where one type of domination is merely exchanged for another. After the leaders of the animal rebellion take over, they impose a single commandment: “All animals are equal, but some animals are more equal than others.”

It is not exactly the same, but recent developments in Acehnese politics are reminiscent of Animal Farm. The Aceh Party, which was spawned by the separatist Free Aceh Movement (GAM), is heading in a worrying direction. Internal conflict among former combatants, as well as their desire to dominate the seats of power in the province, is driving Aceh into another phase of uncertainty.

If the Aceh Party members continue to behave undemocratically, they will go down in history as nothing more than a ragtag bunch of ignoble former rebels who behaved eerily like their former “enemies.”

GAM was an ethnic nationalist movement that mobilized resistance through nationalistic fervor. The roots of the movement were in past injustices, but the conflict later evolved into an antagonistic identity dispute between Aceh and Jakarta.

Especially during the New Order, the conflict reached a level where the idea of an independent Aceh became entrenched as a result of endless oppression and unjust treatment.

As a movement, GAM took advantage of this. It pledged a promised land where democracy would rule and injustice would be a thing of the past. All of Aceh was dragged by the rebels into this independence narrative and into the lengthy struggle.

The rebels in Aceh laid down their arms with the Helsinki peace agreement in 2005. The agreement brought an end to 30 years of war and provided a significant opportunity for the local people to manage their own affairs and participate in a democratic process as Aceh became a special autonomous region.

All the trouble in Aceh was supposed to end there. Today, the reality is that it continues, and it is stubborn.

The seeds of the current tension were planted with the first gubernatorial election soon after the peace agreement. The leadership of the rebels in exile backed a candidate who lacked the support of the majority of former combatants. Ignoring the opinions of the former field commanders, the exiled leaders went ahead with their candidate — who ended up losing by a landslide.

The field commanders had used their networks of former combatants to provide strong backing for their candidate. Irwandi Yusuf was elected as the first governor of post-peace agreement Aceh, but his defiant victory upset the exiled leaders.

These divided camps seemed to have reconciled in the legislative elections, when the exiled leadership and the field commanders agreed to jointly form a political party called Partai Aceh (Aceh Party) to stand a better chance of winning. The reconciliation bore fruit, with the Aceh Party winning the majority of the seats.

Again, the field commanders and their networks provided the crucial machinery to ensure the victory.

Winning a majority of the seats in the provincial legislature was supposed to put GAM in full control of the province and close the chapter on the rebellion, but it did not. Another problem was about to surface.

The Aceh Party, which was and is closely controlled by the exiled former leadership, had not forgotten the embarrassment of that first gubernatorial election and began working toward revenge.

It started a low-level campaign against its unwanted elected governor, meaning that since the 2010 elections, Aceh’s legislature has measured its success by how badly it can undermine Irwandi. Most of the policies introduced by the executive arm of the government are blocked by its legislative arm.

This time, the exiled leaders are in full control of the field commanders and legislature members who, by now, mostly pledge loyalty to the Aceh Party. For many field commanders, the Aceh Party is their vehicle to control the province both politically and economically. To achieve that goal, many of them have decided to stick together.

This is the struggle that we see playing out today in the run-up to the second gubernatorial election. The Aceh Party supports the former exiled leader Zaini Abdullah and former GAM commander Muzakir Manaf, and refuses to support Irwandi despite the governor’s popularity.

To ensure that the governor could not even compete in the election, they went so far as to propose a revision of the Election Law to bar independent candidates from running.

The dispute over independent candidates was politically motivated, intended to stop Irwandi and many other ex-rebels running in the election. Fortunately, it failed, though only after the Constitutional Court’s decision safeguarded the national law. Had it been successful, this attempt to block independent candidates would have been a reversal of democratic progress for the entire country.

It is a nasty game in Aceh, where the players are willing to go so far as to undermine democratic progress and the peace process for their own purposes of retaliation, punishment and control — where all parties are equal, but some are more equal than others.

Greitens, Sheena Chestnut. 2012. “A North Korean Corleone.”

What kind of deal do you make with a 20-something who just inherited not only a country, but also the mantle of one of the world’s most sophisticated crime families? When Kim Jong-un, who is thought to be 28 or 29, became North Korea’s leader in December after the death of his father, Kim Jong-il, he became the de facto head of a mafia state.

How the new leader combines the roles of head of state and mafia don will influence the regime’s future behavior. Crime bosses have different incentives, and dealing with them requires different policies. And any deal—including last week’s agreement by North Korea to suspend its nuclear program in exchange for American food aid—will eventually falter if that reality is ignored.

Kim Jong-un confronts the same problem faced by every dictator: how to generate enough money to pay off the small group of elite supporters—army generals, party and family—who keep him in power. Other autocrats use oil wealth or parcel out whole industries to cronies.

But whoever rules North Korea has less to work with than most. The country defaulted in the 1970s, losing access to international credit, and Soviet subsidies ended with the cold war. In the 1990s, the founder and “eternal president,” Kim Il-sung, died just as a series of natural disasters devastated food production. The country has been an economic and humanitarian basket case ever since.

Kim Jong-il, who began training to run the country in the 1970s and inherited it after his father’s death, came up with an unconventional solution: state-sponsored organized crime. Counterfeit cigarettes and medicine, drugs, insurance fraud, fake money, trafficking in people and endangered species—for decades, the Kim regime has done it all. Its operations became so extensive and well coordinated that American officials nicknamed it the “Soprano state,” after the hit HBO television series.

In the 1970s, after the default, North Korea used diplomats as drug mules to keep embassies running. When that got them kicked out of multiple countries and the economy tanked in the 1990s, Kim Jong-il began producing drugs at home, thereby avoiding a major cost plaguing drug lords elsewhere: law enforcement.

He managed these operations through Bureau 39, a mysterious office under the Central Committee of the Korean Workers’ Party. But to create plausible deniability, he outsourced distribution to Russian mafia, Japanese yakuza and Chinese triad gangs, who met North Korean military forces for drug drops at sea. The regime also manufactured the world’s best counterfeit dollars—so good that they reportedly forced the Treasury to redesign the $100 bill—and used a crime ring connected to the Official Irish Republican Army, a Marxist offshoot of the I.R.A., to launder them in Europe. It even made fake Viagra.

The Agreed Framework that froze North Korea’s nuclear program in October 1994 didn’t stop these activities; they actually increased. Despite its other benefits, the framework didn’t address the fundamental hard currency needs of the North Korean leadership.

This criminal legacy means that Kim Jong-un has even more on his plate than one might think. In addition to running a country that is an economic and humanitarian disaster and a geopolitical hot spot, he also has to manage a global criminal racket. That’s a lot for any 20-something to handle. (As “Sopranos” fans know, A. J.’s taking over for Tony might not have been good for business.)

Despite the seemingly stable transition so far, Kim Jong-un is under pressure. Elite party members who supported his father will be skeptical of his untested ability to fulfill his side of their cash-for-support bargain. And North Korea needs more money than usual this year to celebrate the anniversaries of Kim Il-sung and Kim Jong-il. (In the '70s, one of the first things Kim Jong-il used foreign currency for was a campaign to glorify his father.) Any sign that Kim Jong-un can’t satisfy supporters could crack the facade of elite solidarity.

What’s an aspiring kingpin to do?

First, find the money. Kim Jong-un seems to have done that. One of the last photos released of Kim Jong-il shows him riding a supermarket escalator. Behind him are Kim Jong-un and Jon Il-chun, manager of the infamous Bureau 39.

Second, control the people who earn the money. Illicit activity brings the risk of freelancing, especially when you’re forced to let others do the distribution. As North Korea outsourced the drug trade, its profit margins dropped—and more and more insiders skimmed off the system to line their pockets. Today, reports indicate that methamphetamine is widely used in North Korea (partly because it dulls hunger pangs), and the state is cracking down on the trade it once monopolized. Even Kim Jong-il couldn’t maintain perfect control and had to send operatives abroad to retrieve misbehaving agents. These are delicate tasks easily botched by a novice.

Finally, keep the money coming. Criminal activity was never North Korea’s ultimate objective; the aim was always hard currency. Kim Jong-un needs cash without political conditions to stay in power. But there aren’t many good options for getting it these days, which is why North Korea is likely to pursue new and expanded forms of illicit activity.

Criminal activities are attractive because other sources of money have strings attached. Remittances from defectors, which have risen recently, don’t go to leaders, and they let in information. North Korea could bank on economic reform or Chinese aid, but reform won’t necessarily provide money for the elite, and aid makes Pyongyang uneasily dependent on Chinese patronage.

The cardinal fear of national security experts—which partly motivated last week’s agreement—is that Pyongyang will make money through nuclear proliferation. After all, North Korea is alleged to have helped build the Syrian nuclear reactor that Israel destroyed in 2007. But it may be hard for North Korea to find a buyer; tests of its plutonium warheads have been a questionable technical success, and its uranium-enrichment program may not be advanced enough to make it an attractive seller.

That leaves crime. Last week’s deal does not change the probability that North Korea will engage in it. And new lines of business probably won’t look like the old ones; North Korea’s schemes are creative and highly adaptable.

When drugs and counterfeit dollars got too much exposure, the regime shifted toward cigarettes and insurance fraud. Last summer, South Korean authorities discovered North Korea’s involvement in a hacking ring that exploited online gaming sites to win points and exchange them for cash, making $6 million in two years. Given that cybercriminals across the world gross over $100 billion annually, a country with decent cyberwarfare capabilities could probably do well for itself.

Or could North Korea go legit? Publicly at least, there haven’t been major seizures of its drugs or counterfeit currency in several years, leading analysts to speculate that targeting the country’s illicit finances successfully crippled those particular earning schemes. And Kim Jong-il’s death does give North Korea an opportunity to get out of the game.

But legitimacy won’t solve Kim Jong-un’s problem. Right now his survival is guaranteed by hard currency, and the best source of it is illicit activity. That’s why previous American efforts sought to shut off these activities: to convince the regime it had to reform itself to survive.

That didn’t go quite far enough. Shutting down those activities works only so long as North Korea can’t find new ones. The key to survival was not any one illicit activity but the ability to adapt from one to another—an ability that, with Kim Jong-il gone, likely rests with just a few trusted people. Those people, their loyalties and their relationships are now Kim Jong-un’s biggest vulnerability. If North Korea loses its capacity to adapt, it will lose the ability to make money illicitly—and will have to choose reform.

For America to make successful deals with North Korea, we must first grasp that its leader faces not just a dictator’s problems, but those of a mafia boss. And if you make a deal with the Godfather, you must not overlook the interests of the consigliere standing behind him.

The 2012 general election campaign is likely to be a fight for every last vote, which means that it will also be a fight over who gets to cast one.

Partisan skirmishing over election procedures has been going on in state legislatures across the country for several years. Republicans have called for cutbacks in early voting, an end to same-day registration, higher hurdles for ex-felons, the presentation of proof-of-citizenship documents and regulations discouraging registration drives. The centerpiece of this effort has been a national campaign to require voters to present particular photo ID documents at the polls. Characterized as innocuous reforms to preserve election integrity, beefed-up ID requirements have passed in more than a dozen states since 2005 and are still being considered in more than 20 others.

Opponents of the laws, mostly Democrats, claim that they are intended to reduce the participation of the young, of the poor and of minorities, who are most likely to lack government-issued IDs—and also most likely to vote Democratic.

Conflict over exercising the right to vote has been a longstanding theme in our history. The overarching trend, which we celebrate, has been greater inclusion: property requirements were dropped; racial barriers were formally eliminated; women were enfranchised.

Yet there have always been countertrends. While the franchise expanded during some moments and in some places, it contracted in others, depriving Americans of a right they had once held. Between 1790 and 1850—the period when property requirements were being dropped—four Northern states disenfranchised African-American voters, and New Jersey halted a 17-year experiment permitting women to vote. During this same period, nine states passed laws excluding “paupers” from political rights.

After Reconstruction, both major political parties attempted to constrict the electorate, albeit in different locales. In the South—as is well known—Democratic state legislatures employed a variety of devices, including literacy tests, poll taxes, “understanding” clauses and, eventually, Democratic primaries restricted to whites. As a result, African Americans were largely excluded from electoral participation from the 1890s until the 1960s.

In the North, similar, if less draconian, legal changes, generally sponsored by Republicans, targeted (among others) the millions of immigrant workers pouring into the country. In 1921, for example, New York State adopted an English-language literacy requirement for voters that remained in force (and was enforced) for decades. Almost invariably, these new limits on the franchise were fueled by partisan interests and ethnic or racial tensions; they were embraced by respectable Americans, like the eminent historian Francis Parkman, who had come to view universal suffrage as a “questionable blessing.”

Many of the late nineteenth- and early twentieth-century laws operated not by excluding specific classes of citizens but by erecting procedural obstacles that were justified as measures to prevent fraud or corruption. It was to “preserve the purity of the ballot box” that legislatures passed laws requiring voters to bring their sealed naturalization papers to the polls or to present written evidence that they had canceled their registration at any previous address or to register annually, in person, on one of only two Tuesdays.

The new procedures were widely recognized, by both their advocates and their targets, as having a far greater impact on some groups of voters—immigrants, blue-collar workers, the poor—than on others, and they often succeeded. In Pittsburgh in 1906, a personal registration law, sponsored by Republicans to check the influence of a crusading reformer, cut the number of registered voters in half.

In the 1930s, “pauper exclusion” laws were invoked to disenfranchise jobless men and women who were receiving relief. In 2000, Massachusetts disenfranchised prisoners after they formed an organization to promote inmate rights.

The targets of exclusionary laws have tended to be similar for more than two centuries: the poor, immigrants, African Americans, people perceived to be something other than “mainstream” Americans. No state has ever attempted to disenfranchise upper-middle-class or wealthy white male citizens.

The current wave of procedural restrictions on voting, including strict photo ID requirements, ought to be understood as the latest chapter in a not always uplifting story: Americans of both parties have sometimes rejected democratic values or preferred partisan advantage to fair democratic processes. Acknowledging the realities of our history should lead all of us to be profoundly skeptical of laws that burden, or impede, the exercise of what Lyndon B. Johnson called “the basic right, without which all others are meaningless.” More is at stake here than the outcome of the 2012 election. Even a cursory survey of world events over the last 20—or 100—years makes plain that democracies are fragile, that democratic institutions can be undermined from within. Ours are no exception.

The Cold War and the early post-Cold War periods were relatively easy to define and comprehend. The first was roughly the struggle between two superpowers forming a bipolar system in which almost every state had to choose a side. What followed was a period described by Fukuyama as “The End of History,” announcing the triumph of liberal ideas. The US was a global hegemon: selecting when to intervene, expanding NATO’s reach, and dominating international institutions. Following the September 11 attacks, the limits of unilateralism were exposed, and multilateralism emerged thereafter—with its own limitations. Today, “regional multilateralism” may be the next paradigm that can bring about peace, cooperation, and stability in global affairs.

The rise and fall of US hegemony during the 1990s has been well documented. “The Unipolar Moment,” a Foreign Affairs article by Charles Krauthammer, encapsulates the main point in its title: it was a moment. Once this “moment” was over, Fareed Zakaria and others began imagining a “post-American world.”

US power and its global role remain at the core of the contemporary discussion. America still is—and probably will remain for a long time—the world’s undisputed leader in military, economic and technological power. However, the politics of austerity at home and pressing realities abroad necessitate a new US foreign policy. The US cannot go it alone.

Indeed, the US has been refocusing its foreign policy, and Obama has been using the term multilateralism repeatedly. Multilateralism is a prudent strategy for the US and the international system at large; however, it is incomplete. Multilateralism has reached its limits when it comes to Iran’s nuclear program, the recognition of Palestine, the six-party talks on North Korea, and Kosovo’s independence—to name just a few thorny issues.

In a world of diminished US involvement and unsuccessful multilateralist endeavors, an alternative vision for global engagement is necessary. Yet we are faced with a reluctant China, an unprepared India, a European Union in the midst of a financial debacle, and a host of regional powers that focus on their neighborhoods rather than claiming a global role. Given these realities, regional multilateralism can serve as a way out of this dead end.

Regionally, the Middle East is as explosive as ever. The Western Balkans are doubtful about their future within the European Union and may again implode. The African continent has many ongoing conflicts, and even more potential ones remain unresolved. In Latin America, at least two alternative visions for the region are competing. The Far East is actively searching for ways to live with the rise of China. These and other contemporary problems can be better solved at the regional level than at the bilateral or global levels.

This context highlights the importance of regional integration and multilateralism. Regional multilateralism builds on these very ideas; bringing the two together is necessary in today’s world. The buds of regional integration are everywhere in the making, but they have not yet been clearly connected with the principles of multilateralism.

The EU serves as an example of regional integration, and others are following in its footsteps. The African Union has also stepped up its peacekeeping efforts and moved toward further economic integration. However, the quest for regional multilateralism should not be confined by a conventional understanding of geography. For instance, Russia may be a force for stability both in the Far East and in Central Asia, China may have a stake in the affairs of Latin America and Africa, and different parts of what we call the Middle East may integrate with parts of Central Asia or Europe. The very prospect of Turkey joining the EU may be a sign of such developments.

Cross-regional cooperation is key to regional multilateralism. The transatlantic dialogue model between the US and Europe can and should be exported. For instance, in the Far East the US and the EU both cooperate with ASEAN. China and Russia have extended their ties through the Shanghai Cooperation Organization. This, however, did not prevent Russia from establishing the Eurasian Union, in a way reclaiming its sphere of influence. The Middle East, on the other hand, is in dire need of broader—albeit imaginative—regional integration.

The inability of any one power to confront global challenges will lead responsible powers into the fold of regional multilateralism. The transition will be facilitated if it builds on existing regional integration structures. This way, every state will ultimately become a stakeholder in the international system.

For that to happen, regional leaders need to operate as focal points. They need to listen, persuade and inspire insiders, while coordinating with outsiders. This process is different from the traditional spheres-of-influence system. It is based not on Monroe Doctrine-type arrangements and coercion but rather on reassuring security umbrellas and mutually beneficial trade blocs.

Within this new paradigm, emerging regional leaders—such as China, Russia, India, Japan, Brazil, Turkey, and South Africa—will play a more significant role within their regions while at the same time taking part in cross-regional and global issues.

Interview on February 6, 2012, with Shinju Fujihira, Associate Director of the Program on U.S.-Japan Relations, in the Asahi Shimbun.

2011

Despite all the recent talk of “grand bargains,” little attention has been paid to the unraveling of a truly grand bargain that has been at the center of public policy in the United States for more than a century.

That bargain—which emerged in stages between the 1890s and 1930s—established an institutional framework to balance the needs of the American people with the vast inequalities of wealth and power wrought by the triumph of industrial capitalism. It originated in the widespread apprehension that the rapidly growing power of robber barons, national corporations and banks (like J.P. Morgan’s) was undermining fundamental American values and threatening democracy.

Such apprehensions were famously expressed in novelist Frank Norris’s characterization of the nation’s largest corporations—the railroads—as an “octopus” strangling farmers and small businesses. With a Christian rhetorical flourish, William Jennings Bryan denounced bankers’ insistence on a deflationary gold standard as an attempt to “crucify mankind upon a cross of gold.” A more programmatic, and radical, stance was taken by American Federation of Labor convention delegates who in 1894 advocated nationalizing all major industries and financial corporations. Hundreds of socialists were elected to office between 1880 and 1920.

Indeed, a century ago many, if not most, Americans were convinced that capitalism had to be replaced with some form of “cooperative commonwealth”—or that large corporate enterprises should be broken up or strictly regulated to ensure competition, limit the concentration of power and prevent private interests from overwhelming the public good. In the presidential election of 1912, 75 percent of the vote went to candidates who called themselves “progressive” or “socialist.”

Such views, of course, were vehemently, sometimes violently, opposed by more conservative political forces. But the political pressure from anti-capitalists, anti-monopolists, populists, progressives, working-class activists and socialists led, over time, to a truly grand bargain.

The terms were straightforward if not systematically articulated. Capitalism would endure, as would almost all large corporations. Huge railroads, banks and other enterprises—with a few exceptions—would cease to be threatened with nationalization or breakup. Moreover, the state would service and promote private business.

In exchange, the federal government adopted a series of far-reaching reforms to shield and empower citizens, safeguarding society’s democratic character. First came the regulation of business and banking to protect consumers, limit the power of individual corporations and prevent anti-competitive practices. The principle underlying measures such as the Sherman Antitrust Act (1890), the Pure Food and Drug Act (1906) and the Glass-Steagall Act (1933)—which insured bank deposits and separated investment from commercial banking—was that government was responsible for protecting society against the shortcomings of a market economy. The profit motive could not always be counted on to serve the public’s welfare.

The second prong of reform was guaranteeing workers’ right to form unions and engage in collective bargaining. The core premise of the 1914 Clayton Act and the National Labor Relations Act of 1935—born of decades of experience—was that individual workers lacked the power to protect their interests when dealing with large employers. For the most poorly paid, the federal government mandated a minimum wage and maximum hours.

The third ingredient was social insurance. Unemployment insurance (1935), Social Security (1935), and, later, Medicaid and Medicare (1965) were grounded in the recognition that citizens could not always be self-sufficient and that it was the role of government to aid those unable to fend for themselves. The unemployment-insurance program left unrestrained employers’ ability to lay off workers but recognized that those who were jobless through no fault of their own (a common occurrence in a market economy) ought to receive public support.

These measures shaped the contours of U.S. political and economic life between 1940 and 2000: They amounted to a social contract that, however imperfect, preserved the dynamism of capitalism while guarding citizens against the power imbalances and uncertainties that a competitive economy produces. Yet that bargain—with its vision of balance between private interests and public welfare, workers and employers, the wealthy and the poor—has been under attack by conservatives for decades. And the attacks have been escalating.

The regulation of business is decried now, as it was in 1880, as unwarranted interference in the workings of the market: Regulatory laws (including antitrust laws) are weakly enforced or vitiated through administrative rule-making; regulatory agencies are starved through budget cuts; Glass-Steagall was repealed, with consequences that are all too well known; and the financial institutions that spawned today’s economic crisis—by acting in the reckless manner predicted by early twentieth-century reformers—are fighting further regulation tooth and nail. Private-sector employers’ fierce attacks on unions since the 1970s contributed significantly to the sharp decline in the number of unionized workers, and many state governments are seeking to delegitimize and weaken public-sector unions. Meanwhile, the social safety net has frayed: Unemployment benefits are meager in many states and are not being extended to match the length of the downturn; Republicans are taking aim at Medicaid, Medicare, Social Security and Obamacare. The real value of the minimum wage is lower than it was in the 1970s.

These changes have happened piecemeal. But viewed collectively, it’s difficult not to see a determined campaign to dismantle a broad societal bargain that served much of the nation well for decades. To a historian, the agenda of today’s conservatives looks like a bizarre effort to return to the Gilded Age, an era with little regulation of business, no social insurance and no legal protections for workers. This agenda, moreover, calls for the destruction or weakening of institutions without acknowledging (or perhaps understanding) why they came into being.

In a democracy, of course, the ultimate check on such campaigns is the electoral system. Titans of industry may wield far more power in the economic arena than average citizens, but if all votes count equally, the citizenry can protect its core interests—and policies—through the political arena. This makes all the more worrisome recent conservative efforts to alter electoral practices and institutions. Republicans across the nation have sponsored ID requirements for voting that are far more likely to disenfranchise legitimate (and relatively unprivileged) voters than they are to prevent fraud. Last year, the Supreme Court, reversing a century of precedent, ruled that corporate funds can be used in support of political campaigns. Some Tea Partyers even want to do away with the direct election of senators, adopted in 1913. These proposals, too, seem to have roots in the Gilded Age—a period when many of the nation’s more prosperous citizens publicly proclaimed their loss of faith in universal suffrage and democracy.

Some of you knew Ted Forstmann much better than I did. Most of you knew him much longer. When Ted’s family and closest colleagues asked me to join Mayor Bloomberg and Charlie Rose in offering a eulogy to Ted, I must admit I was hesitant, not to mention humbled. What could be more presumptuous than for a British-born professor to try to do justice to one of the great American capitalists?

And then I remembered the side of Ted that I suspect relatively few of you saw. Teddy the philosopher. Teddy, my coauthor.

When I heard the news of Ted’s death—which we’d been dreading for weeks—my first thought was: he was the most American American I’ve ever known. Financier. Fun lover. Philanthropist. And a man who couldn’t abide cant—in both senses. Cant in the sense of insincere humbug. And can’t in the sense of “this can’t be done.”

And yet there was another side to Ted that was a little less classically all-American. He was, after all, a single parent. He was a man for whom the color line—for so long this country’s curse—was simply not visible.

He was also a matchmaker: a Cupid with a Gulfstream 5 instead of wings. He took a fatherly interest in my romance with Ayaan, whom he did so much to help after she was forced to leave the Netherlands, and who can’t be here for the very excellent reason that she’s about to give birth to our son. Ted was one of those people who didn’t advise her against me, and I’ll be grateful for that until the day I die.

What I really want to remember today, however, is Ted’s secret life as an intellectual. Ted was no ordinary master of the financial universe. He saw things differently. He was what the Germans call a Querdenker, which the English “lateral thinker” doesn’t quite translate.

From the moment we met, he and I talked about his fears for this country’s financial and political system. He had shared my foreboding about the excesses of the early 2000s. And he also shared my fear that when the crisis struck, people would leap to the wrong conclusions.

In a piece we wrote together for The Wall Street Journal back in April of last year, we made an argument that I believe still holds good: that in a mood of legitimate public anger at the consequences of the crisis, this country is drawing the wrong conclusions about its causes.

Unlike many people in the financial world, Ted Forstmann was not afraid to criticize Wall Street. (It was I who had to tone down his invective.) But what Ted dreaded was that the backlash that was bound to follow the crisis would lead to precisely the hypertrophic regulation we now see emerging over literally thousands of pages—as well as to demagogic calls for redistribution via higher tax rates and expanded federal programs.

Ted was convinced that any new regulation should focus strictly on excess leverage and the derivatives markets. Those, for him, were the root causes of the crisis.

With Ronald Reagan, he also passionately believed that enlarging the government was not the answer to the problem; often, it was the problem. That was why he wanted to see more disadvantaged kids going to private schools. His ideal was social mobility, not state-mandated equality. In this, as in so many ways, Ted was very wise.

A couple of years ago, two of my kids had the privilege of having lunch with Ted at one of his favorite restaurants, Harry Cipriani, just nine blocks from here. Last weekend I asked my younger son, who’s now 12, if he remembered the conversation. He did. Ted’s advice was this: “Don’t do the obvious thing. Don’t follow in anybody’s footsteps. Look around you and figure out what’s needed, what’s missing. Then do that.”

I hope my son heeds that advice. I hope his whole generation heeds it. I know, Everest and Siya, that you will.

I admit I was surprised by my own reaction to the news of his death. My first thought was: oh, no, now I won’t be able to ask Ted what he thinks anymore. What he thinks about the economy. What he thinks about politics. I won’t be able to get his take on the presidential candidates. And suddenly I felt really bereft.

That morning I had to write a column for Newsweek. I couldn’t help myself: I just sat down and addressed it directly to him. What’s your take, Ted? As I was writing it—and boy, did the words flow—I realized just how much I am going to miss his wisdom. Because I could never predict what Ted’s take would be. To a pedestrian, risk-averse academic like me, the way he thought about the world was full of surprises—and always illuminating ones.

Ted, you were in many ways the most American of Americans. You were the quintessential doer. But you were also a thinker. And we really do miss the unique way you thought.

Wisdom is in short supply these days. You took so much with you when you left us.
