Lenore G. Martin and Dan Avnon participated in an interview with CGI English to discuss US-Israel ties. It is no secret that Israeli Prime Minister Benjamin Netanyahu does not enjoy a good relationship with US President Barack Obama. But recently Netanyahu has taken the offensive, pushing Washington to get tougher on Iran at a sensitive moment ahead of the US presidential election in early November. Israel is urging the US to establish a red line for Iran's nuclear research, which Western countries believe is aimed at producing a nuclear bomb. And Netanyahu appears to be venturing into US domestic politics by criticizing the Obama administration and seemingly siding with Obama's rival Mitt Romney, who advocates a tougher policy toward Iran.

Listen to the interview (mp3)
Drawing on the research and experience of fifteen internationally recognized Latin America scholars, this insightful text presents an overview of inter-American relations during the first decade of the twenty-first century. This unique collection identifies broad changes in the international system that have had significant effects in the Western Hemisphere, including issues of politics and economics, the securitization of U.S. foreign policy, balancing U.S. primacy, the wider impact of the world beyond the Americas, especially the rise of China, and the complexities of relationships between neighbors.

Contemporary U.S.-Latin American Relations focuses on the near-neighbors of the United States—Mexico, Cuba, the Caribbean, and Central America—as well as the larger countries of South America, including Argentina, Brazil, Chile, Colombia, Peru, and Venezuela. Each chapter addresses a country’s relations with the United States, and each considers themes that are unique to that country’s bilateral relations as well as those that are more general to the relations of Latin America as a whole. This cohesive and accessible volume is required reading for Latin American politics students and scholars alike.
The 2012 election campaign—for Congress as well as the presidency—promises to be bitterly fought, even nasty. Leaders of both major parties, and their core constituents, believe that the stakes are exceptionally high; neither party has much trust in the goodwill or good intentions of the other; and, thanks in part to the Supreme Court, money will be flowing in torrents, some of it from undisclosed sources and much of it available for negative campaigning.
This also promises to be a close election—which is why a great deal of attention is being paid to an array of recently passed, and pending, state laws that could prevent hundreds of thousands, perhaps millions, of eligible voters from casting ballots. Several states, including Florida (once again, a battleground), have effectively closed down registration drives by organizations like the League of Women Voters, which have traditionally helped to register new voters; some states are shortening early-voting periods or prohibiting voting on the Sunday before election day; several are insisting that registrants provide documentary proof of their citizenship. Most importantly—and most visibly—roughly two dozen states have significantly tightened their identification rules for voting since 2003, and the pace of change has accelerated rapidly in the last two years. Ten states have now passed laws demanding that voters possess a current government-issued photo ID, and several others have enacted measures slightly less strict. A few more may take similar steps before November—although legal challenges could keep some of the laws from taking effect.
The new ID laws have almost invariably been sponsored—and promoted—by Republicans, who claim that they are needed to prevent fraud. (In five states, Democratic governors vetoed ID laws passed by Republican legislatures.) Often working from a template provided by the conservative American Legislative Exchange Council (ALEC), Republican state legislators have insisted that the threat of election fraud is compelling and widespread; in December 2011, the Republican National Lawyers Association (RNLA) buttressed that claim by publishing a list of reported election crimes during the last 12 years. Republicans have also maintained that a photo ID requirement is not particularly burdensome in an era when such documents are routinely needed to board an airplane or enter an office building. Public opinion polls indicate that these arguments sound reasonable to the American people, a majority of whom support the concept of photo ID requirements. The Supreme Court has taken a similar view, although it left open the possibility of reconsidering that verdict if new evidence were to emerge.

Critics of these laws (myself included) have doubted both their necessity and their ability to keep elections honest. The only type of fraud that a strict photo ID rule would actually prevent is voter impersonation fraud (I go to the polls pretending to be you), and, in fact, voter impersonation fraud is exceedingly rare. In Indiana, where the Republican-dominated legislature passed one of the first new ID laws in 2005 (on a straight party-line vote), there had been no known instances of voter impersonation in the state’s history. In Texas, a strict ID law was enacted last year, even though the 2008 and 2010 elections gave rise to only five formal complaints about voter impersonation (out of 13 million votes cast). “There are more UFO and Bigfoot sightings than documented cases of voter impersonation,” quipped one Texas Democrat.
Close inspection of the RNLA’s inventory of election fraud, moreover, has found it to be flawed and misleading; most election experts believe that the greatest threat to election integrity comes from absentee ballots—a threat that would not be addressed by the current laws.
As importantly, the burdens placed on prospective voters by these ID requirements are not trivial. Men and women who already possess driver’s licenses or passports, of course, will be unaffected. (So too will those in Texas who have permits to carry concealed weapons—since those permits meet the ID requirement.) But citizens who lack such documents will now be obliged to assemble various other pieces of paper (birth certificates, naturalization forms, proof of residence, etc.) and make their way (presumably without a car) to a government office that can issue an official photo ID. Who are these men and women? Studies indicate that they are disproportionately young or elderly, poor, black, and Hispanic; demographically, they are more likely than not to vote Democratic. (In states covered by the Voting Rights Act, such as Texas and South Carolina, the photo ID laws are being challenged by the Department of Justice on the grounds that they disproportionately affect minorities.) The number of people potentially affected is considerable: the Texas secretary of state, for example, estimates that at least 600,000 already registered voters do not possess the documents needed to cast ballots in November. New York University’s respected Brennan Center for Justice has estimated that a total of more than five million people may lack the requisite identification documents in states that have passed new ID laws.
How many people will actually be prevented from casting ballots by these laws in November? What impact will these laws have on participation? The straightforward answer is that none of us (scholars, commentators, politicians) really know—because the laws are recent and measuring their impact is difficult. (We should know more after November, since several studies will be conducted during this election.) The number is unlikely to be huge, particularly since various pro-voting-rights groups (as well as the Democratic party) will work hard to help people get their ID documents. But it could certainly be large enough to affect the outcome of close races for Congress and even for the presidency.
Whether they have a decisive impact on the election or not, the ID laws—as well as other measures designed to inhibit voting—are disturbing, particularly when located against the backdrop of our extended history of conflict over the right to vote and its exercise. Although the United States has long prided itself on being a paragon of democracy, we did not possess anything even approximating universal adult suffrage until the late 1960s—even though universal suffrage is commonly regarded as an essential ingredient of democracy. It took many decades of mobilization and struggle for voting rights in all states to be extended to African Americans, women, Native Americans, and those who lacked property; at different historical moments, some states (suffrage requirements were largely a matter of state law) also excluded “paupers,” the illiterate, the non-English speaking, and those whose jobs made them too transient to meet long residency requirements.
Moreover, our history has not been one of steady and inexorable progress toward a more inclusive polity. In the very long run, to be sure, we have become more democratic, but there have been numerous moments in our past when the pendulum swung in the opposite direction: men and women who were enfranchised found themselves losing that right. This happened to African Americans in several northern states before the Civil War and in all southern states in the late nineteenth century. It also happened to women in New Jersey in the early 1800s, to men who became “paupers” because of economic downturns, to citizens who could not pay poll taxes (or pass literacy tests), and to prison inmates in Massachusetts in 2000, just 12 years ago. Suffrage rights have contracted as well as expanded.
In addition to this mottled pattern of enfranchisement and disfranchisement, our nation has also witnessed periodic episodes of “voter suppression”—a label frequently invoked by critics to characterize the current wave of photo ID requirements. “Voter suppression” differs conceptually from outright disfranchisement because it does not involve formally disqualifying entire groups of people from the polls; instead, policies or acts of “suppression” seek to prevent, or deter, eligible citizens from exercising their right to vote. Historically, voter suppression seems to arise when organized political forces aim to restrain the political participation of particular groups but cannot, politically or constitutionally, disfranchise them outright. This occurred, of course, in the post-Reconstruction South when white Democratic “redeemers” utilized a variety of techniques (ranging from violence to complex ballot arrangements to poll taxes to orally administered “understanding” tests) to circumvent the Fifteenth Amendment and keep blacks from voting. (Eventually, the suppression of the black vote in the South shaded into, and became, disfranchisement through clever legal innovations such as the all-white Democratic primary.) The phrase “vote suppression” was first widely used in the United States in the 1880s.
Legal efforts to place obstacles in the path of legitimate voters also recurred in the North between the Civil War and World War I, targeted primarily at the immigrant workers who were flooding into the country. California and New Jersey, for example, began to require that immigrants present their original, sealed naturalization papers at the polls; various states limited the hours that polling places or registration offices were open (at a time when the 10-hour work day was common), while simultaneously requiring annual registration in large cities but not in towns. In New York, in 1908, authorities sought to winnow out Jewish voters—many of whom were socialists—by designating Saturdays and Yom Kippur as registration days. Such measures were commonly justified as necessary to prevent fraud.
The recent wave of ID laws (and their cousins) bears a close resemblance to past episodes of voter suppression, particularly those of the late-nineteenth and early-twentieth centuries. The laws seem tailored less to guarantee the integrity of elections than to achieve a partisan purpose; the targeted constituencies—those directly affected by the laws—tend, once again, to be the poor, the less advantaged, or members of minority groups. It may not be a coincidence that the phrase “voter suppression”—like “vote suppression” in the 1880s—has become a prominent part of our political vocabulary during an era of large-scale immigration and in the wake of a dramatic extension of voting rights to African Americans.

This is not to say—the point is important—that there is anything intrinsically wrong with a system of election administration that requires voters to present some type of ID card or photo ID at the polls. Many countries demand that voters present their national identification cards (or special voting cards) when they show up to cast their ballots. Preventing election fraud is a legitimate state function, and, as Rhode Island’s independent governor, Lincoln Chafee, recently observed while signing a new ID measure into law, asking for identification can be “a reasonable request to ensure the accuracy and integrity of our elections.” Requiring voters to present an ID need not be suppressive or discriminatory.
The devil is in the details—as is always true with laws that tap the tension between election integrity and access to the ballot box. Like many critics of the recent legislation, I could welcome a photo ID requirement—if it were made clear that it was the responsibility of the state (rather than of private citizens) to ensure that every eligible man and woman possessed such documentation. Imagine, for example, a system in which any voter who arrived at the polls without an official ID could apply for one at the polling place (it could be mailed out in subsequent weeks) and then was permitted to cast a provisional ballot (which would be counted if she proved to be eligible). In time, everyone would become equipped with an appropriate ID, and meanwhile no one would be denied the opportunity to vote. (Rhode Island’s new law contains some of these elements.) Such a system would be costly, particularly at the outset, but the expense would be the price of keeping elections democratic while addressing the concerns of those worried about fraud. The state, in effect, would accept responsibility for solving the access problem that its anti-fraud measure had engendered.

Alas, that does not seem to be what the sponsors of the current measures have in mind. In 2008, for example, Indiana’s state government simply tossed the access problem into the laps of individual citizens, leading to a widely publicized episode in which elderly nuns who had been voting for decades arrived at the polls but were not permitted to vote because they lacked driver’s licenses. Other states have adopted the same posture: it is up to potential voters to figure out how to navigate around the new obstacle that the state has placed in their path. As a consequence, some of those voters—perhaps thousands, perhaps hundreds of thousands—will end up being unable to cast ballots in a very important election.
Whatever the numbers turn out to be, the laws themselves are unworthy of a modern, sophisticated nation that identifies itself as democratic. They are not effective policy instruments; they chip away at the core democratic value of inclusiveness; and they resonate with the worst, rather than the best, of our political traditions.
Niall Ferguson is a little concerned these days.

The feeling started years ago, during one of his stints leading a course in Western civilization. “Each time I taught it, I felt I was getting closer to an original answer to the question, ‘Why did the West dominate the rest?’ plus the subordinate question, ‘Is it over?’”

Ferguson, the Laurence A. Tisch Professor of History, believes we are witnessing the end of the predominance of the West—“Europe and North America, broadly,” he says—relative to countries like China, India, and Brazil. Much of the rest of the world has not only caught up with Western achievements; according to Ferguson, the West has also lost faith in its own civilization because of the widespread perception that its success was almost exclusively the result of violence and imperialism.

So his latest book, Civilization: The West and the Rest, was mostly produced amid a mood of uneasiness, he admits. “I was worried that the West was losing sight of what made it so successful, and perhaps losing those advantages that had previously been so important.”

In Civilization, Ferguson dubs these Western advantages his six “killer apps”: competition, science, property rights, medicine, consumerism, and the work ethic. “The prescription must be to reinstall and update these apps, to take these six things and make sure we’re doing them as well as we can,” he argues.
“I’ve spent a lot of time lately thinking about how healthy these things are in the West, and the answer is not very. While I was writing the book I also realized that what other fallen civilizations had in common was the speed with which their downfall happened. Things don’t always happen gradually in history; sometimes they fall apart quite fast. So there’s a certain urgency in my argument. If you don’t watch out, things can go wrong very rapidly. By the way, I think that’s what’s happened in Europe. The financial crisis has gone from bad to worse in the span of a year.”

The candid and sometimes controversial Ferguson is especially troubled by the rampant belief that all civilizations are not only equal, but that “the West was actually bad because Western power was based exclusively on conquest and colonization.”

“That self-flagellation, which has been a feature of academe for a generation, is quite corrosive, because if you teach a generation that the West was essentially wicked and its passing shouldn’t be mourned, then your students aren’t going to feel tremendously committed to its values.”

“The West, in some respects—not all—was a more successful civilization than any other because it was successful economically in making people richer than they ever were before; successful socially in creating greater opportunities, not least for women, than any previous society; and successful culturally in opening up whole avenues of scientific and other inquiry that had previously been closed,” he says. “Therefore, we shouldn’t think of the West just in terms of conquest and colonization, slavery and exploitation. That’s only a part of the story.
The least original thing that the West did after 1500 was empire.”

Apart from his prolific writing (he’s now at work on a multivolume biography of Henry Kissinger), Ferguson makes ample time for his four children, including a new son, and for playing the double bass. The former high school punk rocker (“We had several different names, one of which was ‘The Strand’; we were closely modeled on the Jam”) traded in his six-string after discovering jazz. He still plays bass occasionally with the London-based quintet “A Night in Tunisia.”

Ferguson’s sobering message will air on television this spring, when PBS screens the series ‘Civilization: Is the West History?’, which he wrote and presented. “My argument is, ‘Look, let’s identify the strengths, and let’s not pretend that in the period after 1500 something remarkable didn’t happen. There’s a reason why the West got so much richer, longer-lived, healthier, and better educated than anybody else, and it wasn’t just machine guns.’ This idea annoys some people,” he shrugs, “but that’s OK.”
Should we pay children to read books or to get good grades? Should we allow corporations to pay for the right to pollute the atmosphere? Is it ethical to pay people to test risky new drugs or to donate their organs? What about hiring mercenaries to fight our wars? Auctioning admission to elite universities? Selling citizenship to immigrants willing to pay?

In What Money Can’t Buy, Michael J. Sandel takes on one of the biggest ethical questions of our time: Is there something wrong with a world in which everything is for sale? If so, how can we prevent market values from reaching into spheres of life where they don’t belong? What are the moral limits of markets?
In recent decades, market values have crowded out nonmarket norms in almost every aspect of life—medicine, education, government, law, art, sports, even family life and personal relations. Without quite realizing it, Sandel argues, we have drifted from having a market economy to being a market society. Is this where we want to be?

In his New York Times bestseller Justice, Sandel showed himself to be a master at illuminating, with clarity and verve, the hard moral questions we confront in our everyday lives. Now, in What Money Can’t Buy, he provokes an essential discussion that we, in our market-driven age, need to have: What is the proper role of markets in a democratic society—and how can we protect the moral and civic goods that markets don’t honor and that money can’t buy?
Tonight both presidential candidates acknowledged the centrality of Asia to America’s interests. The Obama administration offers its “Asia pivot” as a foreign policy success story. Mitt Romney wants a bigger Navy to keep America’s commitments in the region credible and robust. According to Secretary of Defense Leon Panetta, 60 percent of the Navy’s warships will be located in the Pacific by 2020.

Not once tonight did anyone talk about what those ships are going to do when they get there.

Tonight’s foreign policy debate allotted less than 15 of its 90 minutes to Asia, a region with the world’s fastest economic growth rates and over half of its population. The only country receiving more than a passing mention was China, and even China was discussed only in economic terms.
How to handle the security relationship between the two countries with the world’s largest military budgets went unmentioned, as did the United States’ broader strategy in a region critical to American security interests, where the next president will have to make a series of tough choices and may well face multiple foreign policy crises.
There was no discussion of American policy on the Korean peninsula, where 28,500 American forces stand watch in a war that has not ended, and where our allies this weekend evacuated residents along the DMZ after North Korea threatened to retaliate against an activist group’s balloon launch with artillery fire.
There was no discussion of the recently announced plan to rotate more American planes, ships, and personnel through the Philippines, which in April sailed an American-made cutter into confrontation with China in disputed waters, and then suggested that America was obligated to assist in that confrontation under the terms of a 1951 mutual defense treaty.There was no discussion of Taiwan, which asked the United States for over 60 new F-16 fighters and last year got a $5.8 billion upgrade to its old ones instead—a decision that the official Chinese press called “a despicable breach of faith”—or of the American “air-sea battle” concept, generally perceived to be a template for future conflict with China.
There was no discussion of whether America’s commitment to those who call for democracy and human rights—a commitment both candidates affirmed—can or should extend past Tunisia and Tahrir to Tibet, where over 50 people have set themselves on fire without producing the political change that a single self-immolation sparked in the Arab World.
Today the world’s attention is riveted on crises in the Middle East. Tomorrow’s flash points lie in Asia. Unfortunately, tonight’s debate did little to clarify how either candidate would handle a 3 a.m. phone call that comes not from Benghazi, but from Beijing.
The killing of 16 Afghan civilians—nine of them children—by a rogue U.S. soldier is a tragedy in several senses. First, because of the loss of innocent life. Second, because the alleged perpetrator is likely someone whose psyche and spirit broke under the pressure of a prolonged counterinsurgency campaign. And third, because it was all so unnecessary.
Because Barack Obama has run a generally hawkish foreign policy, his Republican opponents don't have a lot of daylight to exploit on that issue. But if they weren't so preoccupied with sounding tough, they could go after Obama's foolish decision to escalate the war in Afghanistan back in 2009, which remains his biggest foreign policy blunder to date.

A brutal reality is that counterinsurgency campaigns almost always produce atrocities. Think My Lai, Abu Ghraib, the Haditha massacre, and now this. You simply can't place soldiers in the ambiguous environment of an indigenous insurgency, where the boundary between friend and foe is exceedingly hard to discern, and not expect some of them to crack and go rogue. Even if discipline holds and mental health is preserved, a few commanders will get overzealous and order troops to cross the line between legitimate warfare and barbarism. There isn't a “nice” way to wage a counterinsurgency—no matter how often we talk about “hearts and minds”—which is why leaders ought to think long and hard before they order the military to occupy another country and try to remake its society. Or before they decide to escalate a war that is already underway.
And the sad truth is that this shameful episode would not have happened had Obama rejected the advice of his military advisors and stopped trying to remake Afghanistan from the start of his first term. Yes, I know he promised to get out of Iraq and focus on Central Asia, but no president fulfills all his campaign promises (remember how he was going to close Gitmo?) and Obama could have pulled the plug on this failed enterprise at the start. Maybe he didn't for political reasons, or because commanders like David Petraeus and Stanley McChrystal convinced him they could turn things around. Or maybe he genuinely believed that U.S. national security required an open-ended effort to remake Afghanistan.

Whatever the reason, he was wrong. The sad truth is that the extra effort isn't going to produce a significantly better outcome, and the lives and money that we've spent there since 2009 are mostly wasted. That was apparent before this weekend's events, which can only make an already futile task look even more hopeless. Here's what I wrote about this situation back in November 2009:

“America's odds of winning this war are slim. The Karzai government is corrupt, incompetent and resistant to reform. The Taliban have sanctuaries in Pakistan and can hide among the local populace, making it possible for them simply to outlast us. Pakistan has backed the Afghan Taliban in the past and is not a reliable partner now. Our European allies are war-weary and looking for the exits. The more troops we send and the more we interfere in Afghan affairs, the more we look like foreign occupiers and the more resistance we will face. There is therefore little reason to expect a U.S. victory.”
It didn't take a genius to see this, and I had lots of company in voicing my doubts. It gives me no pleasure to recall it now. Indeed, I wish the critics had been proven wrong and Obama, Petraeus, McChrystal, et al. had been proven right. I concede that the situation in Afghanistan may get worse after we depart, and that more civilians will die at the hands of the Taliban, or as a consequence of renewed civil war. But the brutal fact remains: the United States can't fix that country, it is not a vital U.S. interest that we try, and we should have been gone a long time ago.
Several centuries ago, there was a nation that rose to become a world power on the strength of its innovation and its dedication to capitalist enterprise. It became a major center of trade, a financial powerhouse whose name was well known across the planet. It was blessed with an unusual society that rewarded talent and hard work, not social position—one of the few places where a person who had nothing could realistically dream of a far better life. And then this vibrant place, the envy of the world, suddenly collapsed. Its economy shrank; its people left.

The place was Venice, and if it is hard to imagine that the charming tourist destination was once one of the richest places on Earth, then that is precisely what MIT economist Daron Acemoglu wants me to understand. I had come to the Sloan School of Management cafeteria, its tall windows framing the Charles River, for coffee and a discussion of his favorite topic—why nations fail.
It is a question that has intrigued people for thousands of years, but now Acemoglu and Harvard’s James Robinson offer an answer in an ambitious new book. Their theory, the fruit of a long intellectual partnership and research that digs back to the origins of agriculture, explains why some countries succeed and others do not, why some are awash in prosperity, while others are consumed by poverty and suffering. It explains how a city-state like Venice can rise to prominence, then quickly fail. And it offers a chastening message about the prospects for our own country.
So why do nations fail? Acemoglu has a one-word answer: “Politics.’’
What this means, he explains, is that nations succeed in the long term when they are able to share power broadly. They either develop inclusive institutions, which distribute power and opportunity widely, or “extractive institutions,” designed to plunder wealth for the few.
Throughout history, says Acemoglu, “the great struggle is between the masses and elites who seek to capture the government and put it to their own uses.’’
It is a less obvious answer, with more surprising implications, than is immediately apparent.

To begin with, consider the factors that the two reject. Geography, for example, has long been a favorite explanation for the success of nations. Some places are blessed with natural advantages, while others are not. Certainly, sitting on coveted goods brings great wealth: witness Saudi Arabia or, for that matter, Russia. But over the long reach of history, geography fails to explain which nations have staying power. One can make a convincing list of all the geographical benefits that have accrued to the United States, but when Europeans first arrived in the 15th century, it was South America, not North, that was rich and (relatively) thickly settled.
I’m reading a fascinating new book called Why Nations Fail. The more you read it, the more you appreciate what a fool’s errand we’re on in Afghanistan and how much we need to totally revamp our whole foreign aid strategy. But most intriguing are the warning flares the authors put up about both America and China. Co-authored by the MIT economist Daron Acemoglu and the Harvard political scientist James A. Robinson, “Why Nations Fail” argues that the key differentiator between countries is “institutions.” Nations thrive when they develop “inclusive” political and economic institutions, and they fail when those institutions become “extractive” and concentrate power and opportunity in the hands of only a few.
“Inclusive economic institutions that enforce property rights, create a level playing field, and encourage investments in new technologies and skills are more conducive to economic growth than extractive economic institutions that are structured to extract resources from the many by the few,” they write.
“Inclusive economic institutions are in turn supported by, and support, inclusive political institutions,” which “distribute political power widely in a pluralistic manner and are able to achieve some amount of political centralization so as to establish law and order, the foundations of secure property rights, and an inclusive market economy.” Conversely, extractive political institutions that concentrate power in the hands of a few reinforce extractive economic institutions to hold power.
Acemoglu explained in an interview that their core point is that countries thrive when they build political and economic institutions that “unleash,” empower and protect the full potential of each citizen to innovate, invest and develop. Compare how well Eastern Europe has done since the fall of communism with post-Soviet states like Georgia or Uzbekistan, or Israel versus the Arab states, or Kurdistan versus the rest of Iraq. It’s all in the institutions.
The lesson of history, the authors argue, is that you can’t get your economics right if you don’t get your politics right, which is why they don’t buy the notion that China has found the magic formula for combining political control and economic growth.
“Our analysis,” says Acemoglu, “is that China is experiencing growth under extractive institutions — under the authoritarian grip of the Communist Party, which has been able to monopolize power and mobilize resources at a scale that has allowed for a burst of economic growth starting from a very low base,” but it’s not sustainable because it doesn’t foster the degree of “creative destruction” that is so vital for innovation and higher incomes.
“Sustained economic growth requires innovation,” the authors write, “and innovation cannot be decoupled from creative destruction, which replaces the old with the new in the economic realm and also destabilizes established power relations in politics.”
“Unless China makes the transition to an economy based on creative destruction, its growth will not last,” argues Acemoglu. But can you imagine a 20-year-old college dropout in China being allowed to start a company that challenges a whole sector of state-owned Chinese companies funded by state-owned banks? he asks.
The post-9/11 view that what ailed the Arab world and Afghanistan was a lack of democracy was not wrong, said Acemoglu. What was wrong was thinking that we could easily export it. Democratic change, to be sustainable, has to emerge from grassroots movements, “but that does not mean there is nothing we can do,” he adds.
For instance, we should be transitioning away from military aid to regimes like Egypt and focusing instead on enabling more sectors of that society to have a say in politics. Right now, I’d argue, our foreign aid to Egypt, Pakistan and Afghanistan is really a ransom we pay their elites not to engage in bad behavior. We need to turn it into bait.
Acemoglu suggests that instead of giving Cairo another $1.3 billion in military aid that only reinforces part of the elite, we should insist that Egypt establish a committee representing all sectors of its society that would tell us which institutions—schools, hospitals—they want foreign aid to go to, and have them develop appropriate proposals.
If we’re going to give money, “let’s use it to force them to open up the table and to strengthen the grass-roots,” says Acemoglu.
We can only be a force multiplier. Where you have grass-roots movements that want to build inclusive institutions, we can enhance them. But we can’t create or substitute for them. Worse, in Afghanistan and many Arab states, our policies have often discouraged grass-roots from emerging by our siding with convenient strongmen. So there’s nothing to multiply. If you multiply zero by 100, you still get zero.
And America? Acemoglu worries that our huge growth in economic inequality is undermining the inclusiveness of America’s institutions, too. “The real problem is that economic inequality, when it becomes this large, translates into political inequality.” When one person can write a check to finance your whole campaign, how inclusive will you be as an elected official to listen to competing voices?
Is it culture, the weather, geography? Perhaps ignorance of what the right policies are? Simply, no. None of these factors is either definitive or destiny. Otherwise, how to explain why Botswana has become one of the fastest growing countries in the world, while other African nations, such as Zimbabwe, the Congo, and Sierra Leone, are mired in poverty and violence? Daron Acemoglu and James Robinson conclusively show that it is man-made political and economic institutions that underlie economic success (or lack of it). Korea, to take just one of their fascinating examples, is a remarkably homogeneous nation, yet the people of North Korea are among the poorest on earth while their brothers and sisters in South Korea are among the richest. The south forged a society that created incentives, rewarded innovation, and allowed everyone to participate in economic opportunities. The economic success thus spurred was sustained because the government became accountable and responsive to citizens and the great mass of people. Sadly, the people of the north have endured decades of famine, political repression, and very different economic institutions—with no end in sight. The difference between the Koreas is due to the politics that created these completely different institutional trajectories.
Based on fifteen years of original research, Acemoglu and Robinson marshal extraordinary historical evidence from the Roman Empire, the Mayan city-states, medieval Venice, the Soviet Union, Latin America, England, Europe, the United States, and Africa to build a new theory of political economy with great relevance for the big questions of today, including:
China has built an authoritarian growth machine. Will it continue to grow at such high speed and overwhelm the West?
Are America’s best days behind it? Are we moving from a virtuous circle in which efforts by elites to aggrandize power are resisted to a vicious one that enriches and empowers a small minority?
What is the most effective way to help move billions of people from the rut of poverty to prosperity? More philanthropy from the wealthy nations of the West? Or learning the hard-won lessons of Acemoglu and Robinson’s breakthrough ideas on the interplay between inclusive political and economic institutions?
Why Nations Fail will change the way you look at—and understand—the world.
The rich world’s troubles and inequalities have been making headlines for some time now. Yet a more important story for human welfare is the persistence of yawning gaps between the world’s haves and have-nots. Adjusted for purchasing power, the average American income is 50 times that of a typical Afghan and 100 times that of a Zimbabwean. Despite two centuries of economic growth, over a billion people remain in dire poverty.
This conundrum demands ambitious answers. In the late 1990s Jared Diamond and David Landes tackled head-on the most vexing questions: why did Europe discover modern economic growth and why is its spread so limited? Now, Daron Acemoglu, an economist at MIT, and James Robinson, professor of government at Harvard, follow in their footsteps with “Why Nations Fail”. They spurn the cultural and geographic stories of their forebears in favour of an approach rooted solely in institutional economics, which studies the impact of political environments on economic outcomes. Neither culture nor geography can explain gaps between neighbouring American and Mexican cities, they argue, to say nothing of disparities between North and South Korea.

They offer instead a striking diagnosis: some governments get it wrong on purpose. Amid weak and accommodating institutions, there is little to discourage a leader from looting. Such environments channel society’s output towards a parasitic elite, discouraging investment and innovation. Extractive institutions are the historical norm. Inclusive institutions protect individual rights and encourage investment and effort. Where inclusive governments emerge, great wealth follows.
Britain, wellspring of the industrial revolution, is the chief proof of this theory. Small medieval differences in the absolutism of English and Spanish monarchs were amplified by historical chance. When European exploration began, Britain’s more constrained crown left trade in the hands of privateers, whereas Spain favoured state control of ocean commerce. The New World’s riches solidified Spanish tyranny but nurtured a merchant elite in Britain. Its members helped to tilt the scales against monarchy in the Glorious Revolution of 1688 and counterbalanced the landed aristocracy, securing pluralism and sowing the seeds of economic growth. Within a system robust enough to tolerate creative destruction, British ingenuity (not so different from French or Chinese inventiveness) was free to flourish.
This fortunate accident was not easily replicated. In Central and South America European explorers found dense populations ripe for plundering. They built suitably exploitative states. Britain’s North American colonies, by contrast, made poor ground for extractive institutions; indigenous populations were too dispersed to enslave. Colonial governors used market incentives to motivate early settlers in Virginia and Massachusetts. Political reforms made the grant of economic rights credible. Where pluralism took root, American industry and wealth bloomed. Where it lapsed, in southern slaveholding colonies, a long period of economic backwardness resulted. A century after the American civil war the segregated South remained poor.

Extractive rules are self-reinforcing. In the Spanish New World, plunder further empowered the elite. Revolution and independence rarely provide escape from this tyranny. New leadership is tempted to retain the benefits of the old system. Inclusive economies, by contrast, encourage innovation and new blood. This destabilises existing industries, keeping economic and political power dispersed.

Failure is the rule. Here, Venice provides a cautionary tale. Upward mobility drove the city-state’s wealth and power. Its innovative commenda, a partnership in which capital-poor sailors and rich Venetians shared the profits from voyages, allowed those of modest background to rise through the ranks. This fluidity threatened established wealth, however. From the late 13th century the ducal council began restricting political and economic rights, banning the commenda and nationalising trade. By 1500, with a stagnant economy and falling population, Venice’s descent from great power was well under way.
Moves towards greater inclusivity are disappointingly rare. The French revolution provides an example, but also demonstrates the authors’ unfortunate habit of ignoring historical detail. Revolution put paid to absolutism and led, after a long and messy struggle, to the creation of an enduring republic. Institutions, in the form of a fledgling merchant class, provided momentum for reform, making the difference between the successful French revolution and failed uprisings elsewhere. But the authors give short shrift to the presence and meaning of Enlightenment ideals. It is difficult to believe this did not matter for the French transition, yet the intellectual climate is left out of the story. History is contingent, the authors apologise, but history is what they hope to explain.
The story of Botswana is also unsatisfying. There, a co-operative effort by tribal leaders secured the protection of the British government against the marauding imperialism of Cecil Rhodes. Despite its considerable diamond wealth, which might have spawned a corrupt and abusive elite, Botswana became a rare success in Africa, assisted by the benevolence of its leaders and by having a tiny population. At times the authors come dangerously close to attributing success to successfulness.

The intuition behind the theory is nonetheless compelling, which makes the scarcity of policy prescriptions frustrating. The book is sceptical of the Chinese model. China’s growth may be rooted in the removal of highly oppressive Maoist institutions, but its communist government remains fundamentally extractive. It may engineer growth by mobilising people and resources from low-productivity activities, like subsistence agriculture, toward industry. But without political reform and the possibility of creative destruction, growth will grind to a halt.
Rich countries determined to nudge along the process of institutional development should recognise their limitations, the authors reckon. The point is well taken. It is hard to ignore the role of European expansion in the creation of the underdeveloped world’s extractive institutions which, in self-perpetuating fashion, continue to constrain reform and development. Evidence nonetheless hints that contagious ideals, propitious leadership and external pressure matter. The promise of European Union membership encouraged institutional reform in central and eastern Europe. America eventually eradicated extractive southern institutions and placed the South on a path toward economic convergence. There is no quick fix for institutional weakness, only the possibility that steady encouragement and chance will bring about progress.
It’s a paradox. The economy is in the doldrums. Yet the incumbent is ahead in the polls. According to a huge body of research by political scientists, this is not supposed to happen. On the other side of the Atlantic, it hardly ever does. But in America today, the law of political gravity has been suspended.
First, the economy. It’s growing at a lousy 2 percent. Unemployment is stuck above 8 percent. Manufacturing just contracted for the third straight month. Consumer confidence is sliding. Nearly 47 million Americans are on food stamps. And we’re heading for a fiscal cliff.
Now, the polls. According to the New York Times, President Obama is set to win 51 percent of the popular vote and 311 electoral college votes, including those of key swing states like Colorado, Florida, Iowa, New Hampshire, Nevada, Ohio, Virginia, and Wisconsin. He has a 3 in 4 chance of being reelected.

If Mitt Romney were the kind of guy people felt sorry for, you’d feel sorry for him.
So what’s the explanation? I can think of four possibilities.
Explanation one: I am lying to you. The economy is doing great. No doubt the self-appointed “fact checkers” of the blogosphere are armed and ready to tell you this. (Did I forget to mention that the fiscal cliff is made of green cheese?)
Explanation two: People aren’t telling the truth to the pollsters. The deciding factor in this election will be whether or not a relatively small slice of the electorate - suburban, middle-class voters in a handful of states - deserts the president. Four years ago, as Michael Barone has pointed out, many such people voted for him. Now they are suffering from buyer’s remorse. But there is a certain stigma attached to voting against the man who came to personify not just political change but the end of centuries of racial prejudice. So when asked by pollsters, the swing voters simply don’t fess up.
A variant of this argument is that people currently telling pollsters they’d vote for the president tomorrow won’t actually turn out on Election Day. This seems to me a more likely scenario. Young people and African-Americans turned out in unusually high numbers four years ago. Precisely these groups have fared the worst in the sluggish economy of the past four years. Sure, they’ll never vote for Mitt Romney. But these disillusioned folks may just stay home “staring up at fading Obama posters,” in Paul Ryan’s memorable phrase.
Explanation three: People vote more prospectively than retrospectively. “Are you better off today than you were four years ago?” was the question Ronald Reagan asked voters back in 1980. It’s the question Republicans started asking again last month, and for a moment the Democratic spin-doctors didn’t have a good answer. It took Roger Altman (one of the president’s dwindling band of supporters on Wall Street) to come up with one. Sure, things have been bad - but they are about to get better as housing bounces back and the United States fracks its way to energy independence. So the real question voters should ask themselves is: “Will I be better off in four years’ time than I am right now?”
Explanation four: The economy isn’t the No. 1 issue, despite what people say. The more I watch of this election, the more I incline toward this last explanation.
True, when asked to rank issues, voters mostly put the economy at the top of the list. And yet when asked to make a choice between Barack Obama and Mitt Romney, their choices don’t seem to be economically based.
Many people subscribe to the view that Romney just isn’t likable. They can more readily imagine having a beer or shooting hoops with Obama. Then there is the religious subtext: Mitt Romney’s Mormonism is just a bit weird, whereas Obama’s Evangelicalism Lite offends hardly anyone.
And let’s not forget abortion. For many women, the suspicion that banning abortion, if not contraception too, would be item No. 1 on the Romney-Ryan to-do list trumps all other considerations. The Obama campaign played this card with great success over the summer, with more than a little help from Rep. Todd Akin.
Or maybe, just maybe, this election is boiling down to a contest between white non-Hispanic men and everyone else. After all the high hopes of 2008, it will be depressing if that is the outcome of the Obama presidency: an electorate split along the dividing lines of race and sex.

One thing’s for sure. Though Bill Clinton waxed lyrical last week about his party’s job-creation record, this time it really isn’t the economy, stupid.
In December 2010, the self-immolation of a Tunisian fruit vendor sparked what has come to be termed the “Arab Spring.” What first appeared as an isolated act of protest against local authorities quickly gained broader significance, as it was followed by a series of demonstrations that has shaken the grip of autocratic regimes across the Arab world. A year later, three longstanding dictators - Zine El Abidine Ben Ali of Tunisia, Hosni Mubarak of Egypt, and Muammar el-Qaddafi of Libya - have been ousted, after varying degrees of violence. Syria, Yemen, and Bahrain have all witnessed extensive turmoil, raising serious questions about the
legitimacy and survival of their rulers. Elsewhere, the political leaders of Morocco, Algeria, and Jordan have also been pressured into enacting reforms to try to assuage public demands.
It is obvious that the Turkish foreign minister Ahmet Davutoglu’s “zero problems with the neighbors” policy no longer works, in the face of Turkey’s support for the Syrian defectors who oppose the Assad regime. The foreign minister must now deal with potentially hostile reactions by Syria and its closest ally, Iran, that could have destabilizing regional implications. Iran, for one, cannot afford to allow the Assad regime to fail. It provides Iran with a foothold in the Levant from which to support Hezbollah and threaten Israel on its Lebanese border.
Syrian and Iranian retaliation against Turkey can readily take the form of support for the Kurdistan Worker’s Party, or P.K.K. This group once again has become increasingly violent in its promotion of Kurdish separatism in the Turkish southeast. Syria, Iran and Turkey share a common cause in resisting demands by Kurdish opposition movements in their countries. Only months ago, all three were cooperating in suppressing the P.K.K. For Turkey, this was a welcome change from the 1990s, when Syria and Iran supported the P.K.K. in order to pressure Ankara for foreign policy concessions. Now Damascus and Tehran could again play the P.K.K. card.
To counteract potential Syrian and Iranian subversion and the separatist appeals of the P.K.K., Turkey needs to adapt its zero problems policy to its own southeast. In 2009, the prime minister, Recep Tayyip Erdogan, announced a “Kurdish opening”—a bid at reconciliation with Turkey’s Kurds. However, he quickly closed it, leaving many Kurdish demands for economic development, political rights and cultural recognition unanswered. The Turkish foreign minister’s recent veiled threat to send troops across the Syrian border may be insufficient to deter Syria and Iran from subversively supporting the P.K.K. For a comprehensive resolution of the “Kurdish question,” Ankara also needs to implement effective policies that will over the long term improve the economic, political, and cultural life of Turkey’s Kurds.
As the world struggles to emerge from the greatest financial crisis since the Depression, the institution at the heart of the global economic system is facing a profound crisis of governance. Since the International Monetary Fund’s inception at the end of World War II, Europe and the United States have dominated decision-making. Incredibly, and possibly dangerously, decisions are now being made to keep the backward-looking status quo for at least another five years.

True, the final stage of the race for the top job at the I.M.F. still offers the possibility that a Mexican candidate might beat out the French front-runner. Unfortunately, with Europe still controlling an excessive voting share, the outcome has all the suspense of a Soviet-era election. Worse, the I.M.F. board does not seem to feel the need to establish even a pretext of legitimacy for the powerful No. 2 position; everyone takes for granted that the board will rubber-stamp whomever the Obama administration nominates.

In a world where markets already pay more attention to what happens in China than in Europe, and where loans from emerging economies are keeping the debt-challenged United States economy on life support, the I.M.F.’s outdated governance practices have become an accident waiting to happen. The I.M.F. has long been the last line of defense in emerging-market debt crises, combining big short-term loans with technical assistance that has proven effective far more often than not. Today it is on the front lines of the European debt crisis, with Greece, Ireland and Portugal teetering on the brink. Given Japan’s huge debts and demographic implosion, and China’s runaway growth boom, it is not hard to imagine a vast I.M.F. program in Asia in the next decade. Even the United States is a potential customer if it continues for another 10 or 15 years to neglect its soaring debt burden.

If the fast-growing economies of Asia and Latin America feel disenfranchised from the I.M.F.
— there is still a strong undercurrent of hostility in Asia over the fund’s handling of the 1997-98 Asian financial crisis — it will be difficult for the I.M.F. to raise money to deal with Europe and potentially Japan and to credibly do its work in emerging markets now and in the future. And because American and European leaders do not want to hear when their monetary, fiscal or regulatory policies are out of whack, the I.M.F. is really the only strong voice that can deliver the message; a non-European is best-equipped to deliver it.

Until a few weeks ago, everyone seemed to agree that it was high time for a change. The presumption was that the I.M.F. board would choose its next managing director from the handful of supremely qualified candidates from emerging markets, thereby strengthening its claim to be a truly global institution. The incumbent, Dominique Strauss-Kahn of France, was on record supporting a transparent, merit-based approach for choosing his successor. Given the prestige he had amassed leading the I.M.F. during the crisis, it was assumed that he would use his influence to shepherd in the new era.

Everything changed in mid-May. Mr. Strauss-Kahn was forced to resign after being accused of sexually assaulting a hotel housekeeper. Suddenly, the I.M.F. became tabloid fodder and the plans for an open and meritocratic selection process were tossed out the window. With the I.M.F.’s legitimacy now under unexpected attack on a second front, gender inequality, European leaders inventively coalesced around the French finance minister, Christine Lagarde.

Just a short while ago, the fact that Ms. Lagarde is French would surely have been disqualifying, given that the French have held the I.M.F. leadership for most of the last three decades. Ms. Lagarde’s training as a lawyer, rather than as an economist, might also have been an obstacle. The head of the I.M.F.
is like the head of a central bank, and is frequently confronted with difficult judgments on the sizing and timing of debt programs, not to mention on monetary policy and regulation. Ms. Lagarde has provided a strong and clear voice on the need for dramatic financial sector reform. But weighed against Mexico’s candidate, Agustín G. Carstens, she might have come up short, at least prior to the Strauss-Kahn debacle. Mr. Carstens, who has a Ph.D. from the University of Chicago, has a golden C.V. for the job. The head of the I.M.F. routinely deals with central bankers as well as finance ministers, and Mr. Carstens had held both positions in Mexico. He has also served as a deputy managing director of the I.M.F. and knows the institution inside and out.

Mr. Carstens has rightly argued that a European is going to be hugely conflicted in managing the central challenge facing the I.M.F. today: Europe. Soon, the I.M.F. will likely have to help manage government debt defaults in more than one European nation, starting with Greece. European leaders want to kick the can down the road by bribing the Greeks with more loans to prevent them from defaulting. This is where the I.M.F. normally preaches tough love.

The I.M.F. board has given itself until June 30 to decide. The circumstances of Mr. Strauss-Kahn’s departure have to be taken into consideration, and the fallout on gender issues is not over. There has never been a woman as head of a major multilateral lending institution, and Ms. Lagarde is a highly credible candidate. It seems a done deal, though perhaps there is some way to cap the length of her tenure and improve the selection process next time.

And the managing director is not the only position that matters. At the end of August, John P. Lipsky, the first deputy managing director, who was named to the job by the Bush administration, is due to step down. Why not see if one of the top emerging-market candidates can be a replacement? An effective No.
2 would also be well-positioned to take over when Ms. Lagarde herself steps down. (The last three I.M.F. managing directors have departed without completing their terms.)

There is still time to set in place a merit-based selection process that could eventually form the basis for filling the top job. The I.M.F. may be a poorly understood institution, but it does not have to be a poorly governed one.
It has been a rotten economic decade for the United States. Why—and can anything be done to keep the stagnant new normal from persisting? In Lost Decades: The Making of America’s Debt Crisis and the Long Recovery (W.W. Norton, $26.95), Menzie D. Chinn ’84, of the University of Wisconsin, and Stanfield professor of international peace Jeffry A. Frieden advance a macroeconomic account of these woes, and a (difficult) path away from them. In the preface, they ask, “What happened?” and answer thus:
The United States borrowed and spent itself into a foreign debt crisis. Between 2001 and 2007, Americans borrowed trillions of dollars from abroad. The federal government borrowed to finance its budget deficit; households borrowed to allow them to consume beyond their means. As money flooded in from abroad, Americans spent some of it on hard goods, especially on cheap imports. They spent most of the rest on local goods and services, especially financial services and real estate. The result was a broad-based economic expansion. This expansion—especially in housing—eventually became a boom, then a bubble. The bubble burst, with disastrous effect, and the country was left to pick up the pieces.
The American economic disaster is simply the most recent example of a “capital flow cycle,” in which capital floods into a country, stimulates an economic boom, encourages high-flying financial and other activities, and eventually culminates in a crash. In broad outlines, the cycle describes the developing-country debt crisis of the early 1980s, the Mexican crisis of 1994, the East Asian crisis of 1997-1998, the Russian and Brazilian and Turkish and Argentine crises of the late 1990s and into 2000-2001—and, in fact, the German crisis of the early 1930s and the American crisis of the early 1890s.…
To be sure, the most recent American version of a debt crisis was replete with its own particularities: an alphabet soup of bewildering new financial instruments, a myriad of regulatory complications, an unprecedented speed of contagion. Yet for all the unique features of contemporary events, in its essence this was a debt crisis. Its origins and course are of a piece with hundreds of episodes in the modern international economy.
For a century American policymakers and their allies in the commanding heights of the international financial system warned governments of the risks of excessive borrowing, unproductive spending, foolish tax policies, and unwarranted speculation. Then, in less than a decade, the United States proceeded to demonstrate precisely why such warnings were valid, pursuing virtually every dangerous policy it had advised others against…
The American crisis immediately spread to the rest of the international economy. The world learned a valuable lesson about global markets: they transmit bad news as quickly as good news. The American borrowing binge had pulled much of the world along with it—drawing some countries (Great Britain, Ireland, Iceland, Spain, Greece) into a similar debt-financed boom, and tapping other countries (China, Japan, Saudi Arabia, Germany) for the money to make it possible. The collapse dragged financial markets everywhere over a cliff in a matter of weeks, with broad economic activity following within months.
America's last 10 years might be called “The Decade the Locusts Ate.’’ A nation
that started with a credible claim to lead a second American century
lost its way after the terrorist attacks of Sept. 11, 2001. Whether the
nation will continue on a path of decline, or, alternatively, find our
way to recovery and renewal, is uncertain.The nation began the
decade with a growing fiscal surplus and ended with a deficit so
uncontrolled that its AAA credit rating was downgraded for the first
time in its history. Ten years on, Americans’ confidence in our country
and the promise of the American Dream is lower than at any point in
memory. The indispensable superpower that entered the decade as the most
respected nation in the world has seen its standing plummet. Seven out
of every 10 Americans say that the United States is worse off today than
it was a decade ago. While many of the factors that contributed to
these developments were evident before 9/11, this unprecedented reversal
pivots on that tragic day - and the choices made in response to it.
Those choices had costs: the inescapable costs of the attack, the chosen
costs, and the opportunity costs.Inescapable
costs of 9/11 must be counted first in the 3,000 innocent lives
extinguished that morning. In addition, the collapse of the World Trade
Center and part of the Pentagon destroyed $30 billion of property. The
Dow plunged, erasing $1.2 trillion in value. Psychologically, the
assault punctured the “security bubble’’ in which most Americans
imagined they lived securely. Today, 80 percent of Americans expect
another major terrorist attack on the homeland in the next decade.Were
this the sum of the matter, 9/11 would stand as a day of infamy, but
not as an historic turning point. Huge as these directs costs are, they
pale in comparison to costs of choices the United States made in
response to 9/11: about how to defend America; where to fight Al Qaeda;
whether to attack Iraq (or Iran or North Korea) on grounds that they had
chemical or biological weapons that could be transferred to Al Qaeda;
and whether to pay for these choices by taxing the current generation,
or borrowing from China and other lenders, leaving the bills to the next
generation.Unquestionably, much of what was done to protect
citizens at home and to fight Al Qaeda abroad has made America safer. It
is no accident that the United States has not suffered further
megaterrorist attacks. The remarkable intelligence and Special Forces
capabilities demonstrated in the operation that killed Osama bin Laden
suggest how far we have come.But
the central storyline of the decade focuses on two choices made
by President George W. Bush—his decision to go to war with Iraq and his
commitment to cut taxes, especially for wealthy Americans, and thus not
to pay for the wars in Iraq and Afghanistan.

The cost of his
decision to go to war with Iraq is measured in 4,478 American deaths,
40,000 Americans gravely wounded, and a monetary cost of $2 trillion.

Bush
justified his decision to attack Iraq on the grounds that Saddam
Hussein might arm terrorists with weapons of mass destruction, arguing
that “19 hijackers armed by Saddam Hussein… could bring a day of horror
like none we have ever known.’’ In retrospect, even Bush supporters
agree that we went to war on false premises—since we now know that
Saddam had no chemical or biological weapons.

Suppose,
however, that chemical weapons had been found in Iraq. Would that have
made Bush’s choice a wise decision? What about the many other states
that had chemical or biological weapons that could have been transferred
to Al Qaeda, for example Libya, or Syria, or Iran? What about the state
that unquestionably had an advanced nuclear weapons program, North
Korea, which took advantage of the US preoccupation with Iraq to develop
an arsenal of nuclear weapons and conduct its first nuclear weapons
test?

As for cutting taxes for the wealthy, Bush’s decision left
the nation with a widening gap between government revenues and its
expenditures. Brute facts are hard to ignore: having entered office with
a budgetary surplus that the CBO projected would total $3.5 trillion
through 2008, Bush left office with an annual deficit of over $1
trillion that the CBO projected would grow to $3 trillion over the next
decade.

Finally, and most difficult to assess, are the opportunity
costs: what Robert Frost called the “road not taken.’’ In the immediate
aftermath of 9/11, the United States was the object of overwhelming
international sympathy and solidarity. The leading French newspaper
declared: “We are all Americans.’’ Citizens united behind their
commander in chief, giving him license to do virtually anything he could
plausibly argue would defend us against future attacks.

This rare
combination of readiness to sacrifice at home plus solidarity abroad
sparked imagination. Would Americans have willingly paid a “terrorist
tax’’ on gas that could kick what Bush rightly called America’s “oil
addiction’’? Could an international campaign against nuclear terrorism
or megaterrorism have bent trend lines that leave Americans and the
world increasingly vulnerable to future biological or nuclear terrorist
attacks? What impact could $2 trillion invested in new technologies have
had on American competitiveness?

That such a decade leaves
Americans increasingly pessimistic about ourselves and our future is not
surprising. American history, however, is a story of recurring,
impending catastrophes from which there is no apparent escape—followed
by miraculous recoveries. At one of our darkest hours in 1776 when
defeat at the hands of the British occupying Boston seemed almost
certain, the general commanding American forces, George Washington,
observed: “Perseverance and spirit have done wonders in all ages.’’
Three weeks of peaceful street protests; a couple of Panhellenic Socialist Movement (PASOK) members of parliament resigning this week; a few more PASOK members of parliament challenging the leadership qualities of Greek Prime Minister George Papandreou; rampant unemployment; violent clashes with the police; and one of the worst financial crises in modern Greek history culminated today in...a cabinet reshuffle.
Prime Minister Papandreou is facing the most intense criticism since his election in October of 2009, both from his party and from Greek society. What on Wednesday night looked like a grand coalition government with the main opposition party, Nea Demokratia, was transformed on Thursday into an intra-party “reshuffling for elections.”
The new government was sworn in on June 17 and will be up for a confidence vote on June 21. The opposition parties are not impressed with the reshuffle. Most citizens reacted by saying “same old, same old.”
Not much is expected from this new government. Why is that? To begin with, Papandreou's effort to regain the confidence of the Greek public began with the ambitious idea of a coalition government including many technocrats, but it ended up as a mild cabinet reshuffle that satisfied the narrow interests of the ruling party rather than effectively tackling the mounting problems.
For example, his efforts to recruit Lucas Papademos, an experienced economist who has served as vice president of the European Central Bank, as Minister of Finance did not bear fruit. This is just one example of Papandreou's failure to bring technocrats into the government. Instead, Evangelos Venizelos, a professor of constitutional law and until today defense minister, took up the burden.
Moreover, Theodoros Pangalos remained deputy prime minister despite the fact that he has been the target of most of the chants of the street protesters for the past three weeks. Most ministers were not changed and three important ministers were demoted but not fired—the Ministers of Finance, Interior, and Justice. However, there is a more positive way to read the news. Papandreou managed to build a team that agrees with him, to improve the internal cohesion of the party, and to share the burden with the rest of PASOK.
One step was to remove Katseli, who was probably a victim of her disagreements with the Troika (European Central Bank, IMF, European Commission), from the Ministry of Labor and Social Security. To appease the political base of PASOK and silence a wave of internal criticism that has been mounting within his party, he removed from the government some of his close friends who had been intensely criticized and included some of his personal critics in the government. Last but not least, by promoting Venizelos—his party rival and a contestant for the leadership of the party just a few years ago—to a second deputy prime minister position created for him, Papandreou significantly changed the dynamic within PASOK.
Party cohesion is arguably a precondition for the government to pass the new bundle of austerity measures required to secure more loans from the EU/IMF. Despite these cooptation tactics, however, the new government has already found its critics from within the party. A few minutes after the new government was sworn in, PASOK MP Voudouris argued that the reshuffle was unsatisfactory. Regardless, as a result of this reshuffle, the whole political party is seen as an “accomplice” of Prime Minister Papandreou in this effort.
There are also important changes in the functioning of the government. The Prime Minister re-created a “Government Committee”—something that has been a demand of many party members—where the most important policies are normally decided. The irony is that it is oversized, with ten Ministers participating, yet lacks the key Ministers of Foreign Affairs and Defense.
These changes aim to enhance Papandreou’s ability to delegate responsibility and for the government to coordinate more efficiently. Another important fact is that Pangalos will not be part of the “Government Committee” — something that might appease some of his many critics.
Turning to the Ministry of Finance—the hot potato of this affair—most people believe that Venizelos may be better in the negotiations than the previous Minister of Finance, Papaconstantinou. Venizelos is an experienced politician and charismatic speaker. He has served as minister of culture, justice, transportation, and development. Nevertheless, he is not an economist and thus he will have to rely on the advice of others.
Finally, two promising new faces in the government are Stavros Lambrinidis, (BA from Amherst, JD from Yale), the new Minister of Foreign Affairs, and LSE Professor Elias Mossialos, the new government spokesman and Minister of State.
In the meantime, this Sunday the Eurogroup is meeting in Brussels to decide on the next installment from the EU/IMF bailout package. It seems that the developments in Greece have also alarmed Sarkozy and Merkel to the point that they rushed to declare that they will provide further assistance to Greece and that the private sector can also participate in this scheme on a voluntary basis — a highly contested point so far.
Nevertheless, with few exceptions, the changes have not impressed the Greek people—who are still waiting for social justice and fairer redistribution, and who have grown impatient with political parties—and it is unlikely that they will restore the confidence of our foreign creditors.
If this new government fails to regain the confidence of the people then we will have early elections. And one thing is certain. From these elections a one party government will not emerge.
Call it reckless, call it bold, but the Greek Prime Minister, George Papandreou, has attempted to transform a referendum on the European Union bailout plan for Greece into a referendum about whether the Greeks want to stay in the Eurozone or not. The last time Greece had a popular referendum was in 1974 to decide if the people wanted to keep King Constantine, a descendant of the Royal family that European Powers foisted on the Greek people in the 1860s.
This time around, the Greek Prime Minister has shocked the rest of Europe—and even his own Vice President—with his plans to call for a popular vote on whether to accept the 50% haircut deal that EU heads of state agreed on last week to manage the country’s spiraling debt crisis. It’s the latest in a series of Hail Mary passes by Papandreou to keep his hold on power, but the proposed referendum is really only a distraction from the no-confidence vote he faces, which is scheduled in the Greek Parliament this Friday. As hard as the European leaders may have fought to prevent a Greek default, they failed to take into account the dire state of domestic Greek politics. But even at this moment the solution to the crisis must be a European one.
The gravest threat facing Papandreou right now is from the Greek people. His government party, PASOK, was elected two years ago on an anti-austerity platform, but has since been forced into the position of calling for more austerity than any Greek government in the postwar era. The demonstrations across the country last weekend that disrupted the parades commemorating the Greek resistance in World War II culminated with the forced departure of the President of the Republic, Karolos Papoulias, from the parade in Thessaloniki. The current political system has been facing a legitimacy crisis for a while now. The social contract, based on patronage, established between Greek politicians and the electorate following the fall of the Greek Junta in 1974 is under severe strain.
The second problem facing Papandreou is the dissent and distrust he is experiencing from his own party, which—for the moment—holds a bare majority of 152 seats out of 300 in the Greek Parliament. This past summer in a cabinet reshuffling, Papandreou tried to smooth out the problems in his party by appointing his main internal rival, Evangelos Venizelos, Vice President. But this accommodation reached its breaking point yesterday when Venizelos declared he had not been informed about the referendum by Papandreou, who nevertheless called on him to deliver the bad news to EU leaders. Meanwhile, the opposition parties claim that the government is blackmailing the Greek people and suggest that the only solution is to have early elections.
The crisis of legitimacy reached its peak yesterday when rumors of tensions between the government and the military leadership of the country gained credibility: the Minister of Defense called for the replacement of all the heads of divisions of the armed forces. It would be a controversial decision in the best of times, but one that’s nearly impossible to carry out for a government facing unprecedented unpopularity.
The European Union leaders are dead against three outcomes: the collapse of the Greek parliament, the ouster of Papandreou on Friday, and the negative result of any kind of referendum on the bailout—all of which would ultimately spell the ejection of Greece from the Eurozone and spur financial chaos on the continent. The solution must come from Europe. The meeting at Cannes Thursday—where Papandreou has been invited by Merkel and Sarkozy—is his last chance to appease his European patrons.
The real question is not whether Greece will proceed with the referendum, but rather: who controls Europe? Is it the Germans, who seem to be the only ones who can undo the European Central Bank policy on printing money? The French and the Germans together, who want to keep the Euro strong? Is it the speculators, the banks, and their interests? Or is the EU open to more democratic control, whereby the voters can have a voice?
Whatever the outcome, Greece is now up against the wall thanks to Papandreou. The predicament has suddenly changed from a financial catastrophe and austerity measures to a question about political identity: Do Greeks belong in the European Union or not?
Co-author Thomas Meaney is a doctoral candidate in history at Columbia University and an editor of The Utopian.
In this timely examination of children of immigrants in New York and London, Natasha Kumar Warikoo asks, Is there a link between rap/hip-hop-influenced youth culture and motivation to succeed in school? Warikoo challenges teachers, administrators, and parents to look beneath the outward manifestations of youth culture -- the clothing, music, and tough talk -- to better understand the internal struggle faced by many minority students as they try to fit in with peers while working to lay the groundwork for successful lives. Using ethnographic, survey, and interview data in two racially diverse, low-achieving high schools, Warikoo analyzes seemingly oppositional styles, tastes in music, and school behaviors and finds that most teens try to find a balance between success with peers and success in school.
The rugged Sanriku Coast of northeastern Japan is among the most
beautiful places in the country. The white stone islands outside the
port town of Miyako are magnificent. The Buddhist monk Reikyo could
think of nothing but paradise when he first saw them in the 17th
century. “It is the shore of the pure land,” he is said to have uttered
in wonder, citing the common name for nirvana.
Reikyo’s name for the place stuck. Jodogahama, or Pure Land Beach, is
the main gateway to the Rikuchu Kaigan National Park, a crenellated
seashore of spectacular rock pillars, sheer cliffs, deep inlets and
narrow river valleys that covers 100 miles of rural coastline. It is a
region much like Down East Maine, full of small, tight-knit communities
of hardworking people who earn their livelihoods from tourism and
fishing. Sushi chefs around the country prize Sanriku abalone,
cuttlefish and sea urchin.
Today that coast is at the center of one of the worst disasters in
Japanese history. Despite the investment of billions of yen in disaster
mitigation technology and the institution of robust building codes,
entire villages have been swept out to sea. In some places little
remains but piles of anonymous debris and concrete foundations.
I taught school in Miyako for more than two years in the 1990s, and it
was while hiking in the mountains above one of those picturesque fishing
villages that I came across my first material reminder of the intricate
relationship between the area’s breathtaking geography, its people —
generous and direct — and powerful seismic forces.
On a hot summer day a group of middle-school boys set out to introduce
me to their town, a hamlet just north of Pure Land Beach. While I
started up the steep mountainside the children bounced ahead of me,
teasing me that I moved slowly for someone so tall. “Are you as tall as
Michael Jordan, Miller-sensei?” yelled one boy as he shot past me up the trail.
“Not quite,” I told him, pausing on a spot of level ground to look out
over the neat collection of tile roofs and gardens that filled the back
of a narrow, high-walled bay.
“What is this?” I asked, pointing to a mossy stone marker that occupied
the rest of the brief plateau. A chorus of young voices told me that it
was the high-water mark for the area’s biggest tsunami: more than 50
feet above the valley floor.
“When was that?” I asked, but the boys couldn’t say. They had learned
about it in school, they said, but like children everywhere they had
little sense of time. Everything seemed like ancient history to them,
but the thought of a wave reaching so high over the homes of my friends
sent a chill down my spine, and I began to investigate the region’s history.
A major tsunami has hit the Sanriku Coast every few decades over the
last century and a half. Waves swept the area in 1896, 1933 and 1960.
The small monument was put there, high above the village, to mark the
crest of the 1896 tsunami. The wave killed more than 20,000 people. The
boys’ village, a place called Taro, was almost entirely destroyed.
Seventy-five percent of the population died.
The force of those waves was amplified by the area’s distinctive
geography. The same steep valley walls and deep inlets that make Sanriku
so beautiful also make its villages and towns especially hazardous. The
valleys channel a tsunami’s energy, pushing swells that are only a few
feet high in the open ocean up to stunning heights. Fast-moving water
topped 120 feet in one village in 1896.
In a landscape where earthquakes are a regular occurrence but major
tsunamis happen irregularly, people naturally forget. The small monument — one of several commissioned for towns up and down the coast — was a
mnemonic whose purpose was not commemoration but vigilance. “When there
is an earthquake, watch for tsunami,” reads the rather practical poem
engraved into one such slab.
Japan became a modern industrial state between the 1896 tsunami and the
next major one, in 1933. The country’s radio and newspapers brought the
story of rural fisher-folk swept out to sea to metropolitan audiences.
Three thousand people died in the disaster and the humanitarian crisis
elicited strong feelings of sympathy. The Sanriku region was portrayed
as the nation’s heartland, a place where tradition remained intact, and
the disaster threatened that preserve. Once again, Taro was particularly
hard hit: all but eight of its homes were destroyed and nearly half of
the village’s population of 1,800 souls went missing. The hamlet became
an embodiment of agrarian loss.
It is paradoxical that the response to this threat to traditional ways
was the application of cutting-edge engineering and technology. A huge
concrete seawall was planned for Taro. Completed in 1958, that wall, 30
feet high at points, stretches over 1.5 miles across the base of the valley.
Faith in technology over nature appeared to be vindicated in 1960 with
the great Chilean earthquake, a 9.5-magnitude quake that remains the
largest ever recorded, which set off a Pacific-wide tsunami that killed
61 people in Hilo, Hawaii, before surging unannounced into the Sanriku
Coast seven hours later. More than 120 Japanese died, but Taro remained
largely unaffected, safe behind its sluice gates and concrete wall.
Based in part on this success, a new program of coastal defense was launched.
The Sanriku Coast is now one of the most engineered rural coastlines in
the world. Its towns, villages and ports take shelter behind
state-of-the-art seawalls and vast assemblages of concrete tetrapods
designed to dissipate a wave’s energy. The region is home to one of the
world’s best emergency broadcast systems and has been at the forefront
of so-called “vertical evacuation” plans, building tall, quake-resistant
structures in low-lying areas.
In 2003 Taro announced that it would become a “tsunami preparedness
town.” Working with teams from the University of Tokyo and Iwate
University, the town instituted a direct satellite link to accelerate
the arrival of tsunami warnings. Public education was expanded and
mayors from other towns visited to study this model village. Detailed
maps showing projected maximum tsunami heights — using 1896 as a
baseline — informed the selection of evacuation markers: a reassuring
thick line defined the projected maximum reach of a tsunami. Evacuation
sites were placed above that line on the maps. Similar calculations were
made up and down the coast.
The lines were drawn in the wrong place. Despite the substantial
infrastructure and technological investments in Sanriku, the wave on
March 11 overwhelmed large portions of Taro and Miyako. Some of the
evacuation points were not high enough. The walls were not tall enough.
And the costs are still being tallied.
Thousands of people are missing along this beautiful, injured coast,
hundreds in the town that I called home. I am still waiting to hear from
one of the groomsmen from my wedding, the owner of Miyako’s best coffee
shop and a sometime reader of this newspaper. Google’s people-finder
app tells me he is alive, but I have no idea where he is or how our
other friends fared. As for those rambunctious boys and all of my other
students, I can only hope for the best.
Technology allowed me to learn my friend’s fate. It has also helped to
inspire a worldwide humanitarian response. It may be, however, that a
greater application of technology in the same direction is not the
answer to the problems posed by the March 11 tsunami. As a historian, I
am forced to recognize that there is nothing purely natural about this
catastrophe. It is the result of a far longer negotiation between human
culture and physical forces. Disasters have the counterintuitive
tendency to reinforce the status quo. As the terrifying events at the
Fukushima Daiichi nuclear plant continue to underline, there are very
real costs to an uncritical application of technology.
I look forward to returning to my old Japanese home, but I also look
forward to finding something new and different when I make that journey.
The South African Chinese have long labored to manipulate their racial position to advance their individual and collective economic and political interests. Their negotiation reached its peak under apartheid, the oppressive system of segregation instituted by the National Party in 1948. Under various concurrent tenets of apartheid law, the Chinese were classified as non-white, Coloured, Asian, and Chinese. Like other non-white groups, the Chinese were subject to discrimination because of their race. Yet over the course of apartheid, the Chinese slowly gained more rights. By the late 1970s, they were still Chinese but had won many of the privileges reserved for Whites. The Chinese population managed this success through their small size and specific political strategies intended to portray their community as diligent, law-abiding citizens. Instead of protesting the existing social order, they sought to manipulate the apartheid apparatus to their advantage. Ultimately, the South African Chinese managed to manipulate racial policies to their advantage because of the apartheid state’s overarching concerns about its political and economic relations with the Republic of China’s government. Chinese South Africans represent a minuscule fraction of South Africa’s population and have received a commensurately small amount of historiographic attention. However, their experiences offer a privileged vantage point into the connections between South Africa’s domestic racial policies and international relations during the apartheid years. Ultimately, this study demonstrates that the international context deeply shaped the construction and reconstruction of racial and ethnic categories in apartheid South Africa—a regime too often dismissed as exceptional and divorced from a changing international order.
This work not only engages the literature on the experiences of the South African Chinese, but also provides a critical case study for the larger literature on the functional utility of race in the policy formation of apartheid.
After years when young Americans yearned only to be occupied on Wall Street, suddenly they have taken to occupying it. It’s easy to scoff at this phenomenon. I know, because I have.
This is certainly not America’s answer to the Arab Spring—the Bobo Fall perhaps, unmistakably both bohemian and bourgeois. But it’s still worth taking seriously. What is it that makes evidently educated young people yearn to adopt leftist positions that are eerily reminiscent of the ones their parents adopted in 1968?
Check out the protesters’ website, which on Monday featured a speech by Slovenian critical theorist Slavoj Žižek. At first I thought this must be some kind of parody, but no, he really exists—red T-shirt, Krugman beard, and all: “The only sense in which we are communists is that we care for the commons. The commons of nature. The commons of what is privatized by intellectual property. The commons of biogenetics. For this and only for this we should fight.”
Yeah, man. Property is theft. Ne travaillez jamais. And all that.
There are three possible explanations for this retrogression to the language of ’68:
1. Increasing inequality exemplified by Wall Street is worth protesting against.
2. So is the fact that only a handful of bankers have been prosecuted for their part in the financial crisis.
3. Demonstrating is way cool.
Yet if I were a young American today, occupying Wall St. would not be my objective. Just reflect for a minute on the unbridled economic mayhem that would ensue if the protesters actually succeeded. The headline “Goldman Sachs Under Control of Hip Teenage Revolutionaries” would be the last straw for an already fragile economic recovery.
Now ask yourself what the financial crisis really means for today's 15- to 24-year-olds. Not only has it raised the probability that they will be unemployed after graduation. More seriously, it has massively increased the debt that they will have to service when they do get jobs.
Never in the history of intergenerational transfers has one generation left such a mountain of IOUs to another as the baby boomers are leaving to their grandchildren.
When you do the math, there is only one logical political home for today’s teens and 20-somethings ... and that is the Tea Party. For who else is promising to slash Medicare and Social Security and keep the tax burden at its historical average?
Let’s just remind ourselves of the report of the Trustees of the Social Security and Medicare trust funds back in 2007, which projected a rise in the cost of these two programs from 7.3 percent of gross domestic product to 17.5 percent by 2030. The trustees warned that to achieve actuarial balance—in other words, solvency—for these two programs would require (for Social Security) an increase of 16 percent in payroll tax revenues or an immediate reduction in benefits of 13 percent. For Medicare we are talking a 122 percent increase in payroll taxes or a 51 percent cut in spending.
As Laurence Kotlikoff and Scott Burns pointed out in The Coming Generational Storm, by 2030 there will be twice as many retirees as there are today but only 18 percent more workers. Unless there is really radical reform of entitlement programs - especially Medicare - the next generation of American workers will be paying roughly double the taxes their parents and grandparents paid. This is what Kotlikoff and Burns mean by “fiscal child abuse.”
Of these harsh realities the occupiers of Wall Street seem blissfully unaware. Fixated on the idea that they somehow represent the 99 percent of people who scrape by on 80 percent of total income, they fail to see that the real distributional conflict of our time is not between percentiles, much less classes, but between generations. And no generation has a keener interest in slashing future spending on entitlements than today’s teens and 20-somethings.
So occupying Wall Street is not the answer to this generation’s problems. The answer is to occupy the Tea Party—and wrest it from the grumpy old men who currently run it.
Call it the Iced Tea Party.
Between 1876 and 1945, thousands of Japanese civilians—merchants, traders, prostitutes, journalists, teachers, and adventurers—left their homeland for a new life on the Korean peninsula. Although most migrants were guided primarily by personal profit and only secondarily by national interest, their mundane lives and the state’s ambitions were inextricably entwined in the rise of imperial Japan. Despite having formed one of the largest colonial communities in the twentieth century, these settlers and their empire-building activities have all but vanished from the public memory of Japan’s presence in Korea.Drawing on previously unused materials in multi-language archives, Jun Uchida looks behind the official organs of state and military control to focus on the obscured history of these settlers, especially the first generation of “pioneers” between the 1910s and 1930s who actively mediated the colonial management of Korea as its grassroots movers and shakers. By uncovering the downplayed but dynamic role played by settler leaders who operated among multiple parties—between the settler community and the Government-General, between Japanese colonizer and Korean colonized, between colony and metropole—this study examines how these “brokers of empire” advanced their commercial and political interests while contributing to the expansionist project of imperial Japan.