Two acclaimed political economists explore the origins and long-term effects of the financial crisis in historical and comparative perspective.
Welcome to Argentina: by 2008 the United States had become the biggest international borrower in world history, with almost half of its 6.4 trillion dollar federal debt in foreign hands. The proportion of foreign loans to the size of the economy put the United States in league with Mexico, Pakistan, and other third-world debtor nations. The massive inflow of foreign funds financed the booms in housing prices and consumer spending that fueled the economy until the collapse of late 2008.
The authors explore the political and economic roots of this crisis as well as its long-term effects. They explain the political strategies behind the Bush administration's policy of funding massive deficits with the foreign borrowing that fed the crisis. They see the continuing impact of our huge debt in a slow recovery ahead. Their clear, insightful, and comprehensive account will long be regarded as the standard on the crisis.
Does an expansion of health insurance increase or decrease use of the emergency department (ED)? Both predictions can be justified logically. On the one hand, research on patient cost sharing predicts that expanded insurance coverage could increase ED utilization by reducing the out-of-pocket cost of a visit, especially in the face of physician shortages. This view has been echoed by elected leaders: Senator Jon Kyl (R-AZ), citing the Massachusetts experience with health care reform, claimed that if anything, universal coverage brought even higher rates of emergency room visits because of the increased difficulty of getting appointments for outpatient physician visits. Others have predicted that expanded coverage would actually reduce ED use, since previously uninsured patients would gain access to preventive care. The relative importance of these countervailing forces clearly weighs on physicians: in a survey of emergency physicians conducted in April 2010, about 71 percent said they expected emergency visits to increase after the passage of the Affordable Care Act (ACA).

To explore the importance of these effects, we examined the Massachusetts experience. The state's 2006 health care reform was a model for the ACA, and it reduced the proportion of Massachusetts adults under the age of 65 who were uninsured by 7.7 percentage points between the fall of 2006 and the fall of 2009. To determine whether any changes in ED utilization in Massachusetts reflected the effect of the state's reform or were merely representative of broader regional trends in ED utilization, we used New Hampshire and Vermont as control states.
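The control-state comparison described above is a standard difference-in-differences design: the change in Massachusetts is compared with the contemporaneous change in New Hampshire and Vermont, which nets out regional trends. A minimal sketch of the calculation; the visit rates below are hypothetical placeholders, not the study's data:

```python
# Difference-in-differences sketch of the Massachusetts ED study design.
# All visit rates below are illustrative placeholders, not actual data.

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """Change in the treated state minus the change in the control states."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical ED visits per 1,000 residents, before and after the 2006 reform.
ma_pre, ma_post = 400.0, 420.0        # Massachusetts (treated)
nh_vt_pre, nh_vt_post = 380.0, 395.0  # New Hampshire / Vermont (controls)

effect = did_estimate(ma_pre, ma_post, nh_vt_pre, nh_vt_post)
print(f"Estimated reform effect: {effect:+.1f} visits per 1,000")
```

With these made-up numbers, Massachusetts rises by 20 and the controls by 15, so the design attributes +5 visits per 1,000 to the reform rather than to regional trends.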
This book applies an established analytical framework for health sector reform (Getting Health Reform Right, Oxford, 2004) to the performance problems of the pharmaceutical sector. The book is divided into three sections. The first section presents the basic ideas for analysis. It begins by insisting that reform start with a clear understanding of the performance deficiencies of the current system. Like all priority setting in the public sector, this 'definition of the problem' involves both ethical choices and political processes. Early chapters explain the foundations of these ideas and apply them to the pharmaceutical sector. The relationship of ultimate outcomes (like health status or risk protection) to classic health systems concepts like efficiency, access and quality is also explored. The last chapter in the first part is devoted to 'diagnosis'—explaining how to move from the definition of a problem to an understanding of how the functioning of the system produces the undesirable outcomes in question.
The second part of the book devotes one chapter to each of five 'control knobs': finance, payment, organization, regulation and persuasion. These are sets of potential interventions that governments can use to improve pharmaceutical sector performance. Each chapter presents basic concepts and discusses examples of reform options. Throughout we provide 'conditional guidance'—avoiding the approach of a 'one size fits all' model of 'best practices' in these five arenas for reform. Instead we stress the need for local knowledge of political systems, administrative capacities, community values and market conditions in order to design pharmaceutical sector policies appropriate to a country’s particular circumstances.
The last part of the book is a set of teaching cases. Each is preceded by questions and is followed by a brief note on the lessons to be learned. The goal is to help readers develop the skills they need to deal effectively with pharmaceutical sector reform problems in their own countries.
The good news is that today’s teenagers are avid readers and prolific writers. The bad news is that what they are reading and writing are text messages.
According to a survey carried out last year by Nielsen, Americans between the ages of 13 and 17 send and receive an average of 3,339 texts per month. Teenage girls send and receive more than 4,000.
It’s an unmissable trend. Even if you don’t have teenage kids, you’ll see other people’s offspring slouching around, eyes averted, tapping away, oblivious to their surroundings. Take a group of teenagers to see the seven wonders of the world. They’ll be texting all the way. Show a teenager Botticelli’s Adoration of the Magi. You might get a cursory glance before a buzz signals the arrival of the latest SMS. Seconds before the earth is hit by a gigantic asteroid or engulfed by a super tsunami, millions of lithe young fingers will be typing the human race’s last inane words to itself:
C u later NOT :(
Now, before I am accused of throwing stones in a glass house, let me confess. I probably send about 50 emails a day, and I receive what seem like 200. But there’s a difference. I also read books. It’s a quaint old habit I picked up as a kid, in the days before cellphones began nesting, cuckoolike, in the palms of the young.
Half of today’s teenagers don’t read books—except when they’re made to. According to the most recent survey by the National Endowment for the Arts, the proportion of Americans between the ages of 18 and 24 who read a book not required at school or at work is now 50.7 percent, the lowest for any adult age group younger than 75, and down from 59 percent 20 years ago.
Back in 2004, when the NEA last looked at younger readers’ habits, it was already the case that fewer than one in three 13-year-olds read for pleasure every day. Especially terrifying to me as a professor is the fact that two thirds of college freshmen read for pleasure for less than an hour per week. A third of seniors don’t read for pleasure at all.
Why does this matter? For two reasons. First, we are falling behind more-literate societies. According to the results of the Organization for Economic Cooperation and Development’s most recent Program for International Student Assessment, the gap in reading ability between the 15-year-olds in the Shanghai district of China and those in the United States is now as big as the gap between the U.S. and Serbia or Chile.
But the more important reason is that children who don’t read are cut off from the civilization of their ancestors.
So take a look at your bookshelves. Do you have all (better make that any) of the books on the Columbia University undergraduate core curriculum? It’s not perfect, but it’s as good a list of the canon of Western civilization as I know of. Let’s take the 11 books on the syllabus for the spring 2012 semester: (1) Virgil’s Aeneid; (2) Ovid’s Metamorphoses; (3) Saint Augustine’s Confessions; (4) Dante’s The Divine Comedy; (5) Montaigne’s Essays; (6) Shakespeare’s King Lear; (7) Cervantes’s Don Quixote; (8) Goethe’s Faust; (9) Austen’s Pride and Prejudice; (10) Dostoevsky’s Crime and Punishment; (11) Woolf’s To the Lighthouse.
Step one: Order the ones you haven’t got today. (And get War and Peace, Great Expectations, and Moby-Dick while you’re at it.)
Step two: When vacation time comes around, tell the teenagers in your life you are taking them to a party. Or to camp. They won’t resist.
Step three: Drive to a remote rural location where there is no cell-phone reception whatsoever.
Step four: Reveal that this is in fact a reading party and that for the next two weeks reading is all you are proposing to do—apart from eating, sleeping, and talking about the books.
Welcome to Book Camp, kids.
America's last 10 years might be called “The Decade the Locusts Ate.’’ A nation that started with a credible claim to lead a second American century lost its way after the terrorist attacks of September 11, 2001. Whether the nation will continue on a path of decline, or, alternatively, find our way to recovery and renewal, is uncertain.
The nation began the decade with a growing fiscal surplus and ended with a deficit so uncontrolled that its AAA credit rating was downgraded for the first time in its history. Ten years on, Americans’ confidence in our country and the promise of the American Dream is lower than at any point in memory. The indispensable superpower that entered the decade as the most respected nation in the world has seen its standing plummet. Seven out of every 10 Americans say that the United States is worse off today than it was a decade ago. While many of the factors that contributed to these developments were evident before 9/11, this unprecedented reversal pivots on that tragic day, and on the choices made in response to it. Those choices had costs: the inescapable costs of the attack, the chosen costs, and the opportunity costs.
Inescapable costs of 9/11 must be counted first in the 3,000 innocent lives extinguished that morning. In addition, the collapse of the World Trade Center and part of the Pentagon destroyed $30 billion of property. The Dow plunged, erasing $1.2 trillion in value. Psychologically, the assault punctured the “security bubble’’ in which most Americans imagined they lived securely. Today, 80 percent of Americans expect another major terrorist attack on the homeland in the next decade.
Were this the sum of the matter, 9/11 would stand as a day of infamy, but not as an historic turning point. Huge as these direct costs are, they pale in comparison to the costs of the choices the United States made in response to 9/11: about how to defend America; where to fight Al Qaeda; whether to attack Iraq (or Iran or North Korea) on grounds that they had chemical or biological weapons that could be transferred to Al Qaeda; and whether to pay for these choices by taxing the current generation, or borrowing from China and other lenders, leaving the bills to the next generation.
Unquestionably, much of what was done to protect citizens at home and to fight Al Qaeda abroad has made America safer. It is no accident that the United States has not suffered further megaterrorist attacks. The remarkable intelligence and Special Forces capabilities demonstrated in the operation that killed Osama bin Laden suggest how far we have come.
But the central storyline of the decade focuses on two choices made by President George W. Bush: his decision to go to war with Iraq and his commitment to cut taxes, especially for wealthy Americans, and thus not to pay for the wars in Iraq and Afghanistan.
The cost of his decision to go to war with Iraq is measured in 4,478 American deaths, 40,000 Americans gravely wounded, and a monetary cost of $2 trillion.
Bush justified his decision to attack Iraq on the grounds that Saddam Hussein might arm terrorists with weapons of mass destruction, arguing that “19 hijackers armed by Saddam Hussein…could bring a day of horror like none we have ever known.’’ In retrospect, even Bush supporters agree that we went to war on false premises—since we now know that Saddam had no chemical or biological weapons.
Suppose, however, that chemical weapons had been found in Iraq. Would that have made Bush’s choice a wise decision? What about the many other states that had chemical or biological weapons that could have been transferred to Al Qaeda, for example Libya, or Syria, or Iran? What about the state that unquestionably had an advanced nuclear weapons program, North Korea, which took advantage of the US preoccupation with Iraq to develop an arsenal of nuclear weapons and conduct its first nuclear weapons test?
As for cutting taxes for the wealthy, Bush’s decision left the nation with a widening gap between government revenues and its expenditures. Brute facts are hard to ignore: having entered office with a budgetary surplus that the CBO projected would total $3.5 trillion through 2008, Bush left office with an annual deficit of over $1 trillion that the CBO projected would grow to $3 trillion over the next decade.
Finally, and most difficult to assess, are the opportunity costs, Robert Frost’s “road not taken.’’ In the immediate aftermath of 9/11, the United States was the object of overwhelming international sympathy and solidarity. The leading French newspaper declared: “We are all Americans.’’ Citizens united behind their commander in chief, giving him license to do virtually anything he could plausibly argue would defend us against future attacks.
This rare combination of readiness to sacrifice at home plus solidarity abroad sparked imagination. Would Americans have willingly paid a “terrorist tax’’ on gas that could kick what Bush rightly called America’s “oil addiction’’? Could an international campaign against nuclear terrorism or megaterrorism have bent trend lines that leave Americans and the world increasingly vulnerable to future biological or nuclear terrorist attacks? What impact could $2 trillion invested in new technologies have had on American competitiveness?
That such a decade leaves Americans increasingly pessimistic about ourselves and our future is not surprising. American history, however, is a story of recurring, impending catastrophes from which there is no apparent escape—followed by miraculous recoveries. At one of our darkest hours in 1776 when defeat at the hands of the British occupying Boston seemed almost certain, the general commanding American forces, George Washington, observed: “Perseverance and spirit have done wonders in all ages.’’
The United States is in the third year of a grand experiment by the Obama administration to revive the economy through enormous borrowing and spending by the government, with the Federal Reserve playing a supporting role by keeping interest rates at record lows.
How is the experiment going? By the looks of it, not well.
The economy is growing much more slowly than in a typical recovery, housing prices remain depressed and the stock market has been in a slump—all troubling indicators that another recession may be on the way. Most worrisome is the anemic state of the labor market, underscored by the zero growth in the latest jobs report.
The poor results should not surprise us given the macroeconomic policies the government has pursued. I agree that the recession warranted fiscal deficits in 2008–2010, but the vast increase of public debt since 2007 and the uncertainty about the country’s long-run fiscal path mean that we no longer have the luxury of combating the weak economy with more deficits.
Today’s priority has to be austerity, not stimulus, and it will not work to announce a new $450 billion jobs plan while promising vaguely to pay for it with fiscal restraint over the next 10 years, as Mr. Obama did in his address to Congress on Thursday. Given the low level of government credibility, fiscal discipline has to start now to be taken seriously. But we have to do even more: I propose a consumption tax, an idea that offends many conservatives, and elimination of the corporate income tax, a proposal that outrages many liberals.
These difficult steps would be far more effective than the president’s failed experiment. The administration’s $800 billion stimulus program raised government demand for goods and services and was also intended to stimulate consumer demand. These interventions are usually described as Keynesian, but as John Maynard Keynes understood in his 1936 masterwork, “The General Theory of Employment, Interest and Money” (the first economics book I read), the main driver of business cycles is investment. As is typical, the main decline in G.D.P. during the recession showed up in the form of reduced investment by businesses and households.
What drives investment? Stable expectations of a sound economic environment, including the long-run path of tax rates, regulations and so on. And employment is akin to investment in that hiring decisions take into account the long-run economic climate.
The lesson is that effective incentives for investment and employment require permanence and transparency. Measures that are transient or uncertain will be ineffective.
And yet these are precisely the kinds of policies the Obama administration has pursued: temporarily cutting the payroll tax rate, maintaining the marginal income-tax rates from the George W. Bush era while vowing to raise them in the future, holding off on clean-air regulations while promising to implement them later and enacting an ambitious overhaul of Wall Street regulations while leaving lots of rules undefined and ambiguous.
Is there a better way? I believe that a long-term fiscal plan for the country requires six big steps.
Three of them were identified by the Bowles-Simpson deficit reduction commission: reforming Social Security and Medicare by increasing ages of eligibility and shifting to an appropriate formula for indexing benefits to inflation; phasing out “tax expenditures” like the deductions for mortgage interest, state and local taxes and employer-provided health care; and lowering the marginal income-tax rates for individuals.
I would add three more: reversing the vast and unwise increase in spending that occurred under Presidents Bush and Obama; introducing a tax on consumer spending, like the value-added tax (or VAT) common in other rich countries; and abolishing federal corporate taxes and estate taxes. All three measures would be enormously difficult—many say impossible—but crises are opportune times for these important, basic reforms.
A broad-based expenditure tax, like a VAT, amounts to a tax on consumption. If the base rate were 10 percent, the revenue would be roughly 5 percent of G.D.P. One benefit from a VAT is that it is more efficient than an income tax—and in particular the current American income tax system.
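The arithmetic behind the claim above is worth making explicit: a 10 percent rate yielding roughly 5 percent of GDP implies a taxable consumption base of about half of GDP. A back-of-envelope sketch under that assumption (the 50 percent base is inferred from the text, not a precise figure):

```python
# Back-of-envelope VAT revenue arithmetic implied by the text: a 10 percent
# rate raising ~5 percent of GDP implies a taxable consumption base of
# roughly half of GDP. The base share is an illustrative assumption.

def vat_revenue_share(rate, consumption_share_of_gdp):
    """Revenue as a share of GDP for a broad-based consumption tax."""
    return rate * consumption_share_of_gdp

base = 0.50                            # assumed taxable consumption / GDP
print(vat_revenue_share(0.10, base))   # 0.05 -> ~5 percent of GDP, as in the text
print(vat_revenue_share(0.25, base))   # 0.125 -> at Western European-style rates
```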
I received vigorous criticism from conservatives after advocating a VAT in an essay in The Wall Street Journal last month. The main objection—reminiscent of the complaints about income-tax withholding, which was introduced in the United States in 1943—is that a VAT would be a money machine, allowing the government to readily grow larger. For example, the availability of easy VAT revenue in Western Europe, where rates reach as high as 25 percent, has supported the vast increase in the welfare state there since World War II. I share these concerns and, therefore, favor a VAT only if it is part of a package that includes other sensible reforms. But given the likely path of government spending on health care and Social Security, I see no reasonable alternative.
Abolishing the corporate income tax is similarly controversial. Any tax on capital income distorts decisions on saving and investment. Moreover, the inefficiency is magnified here because of double taxation: the income is taxed when corporations make profits and again when owners receive dividends or capital gains. If we want to tax capital income, a preferred method treats corporate profits as accruing to owners when profits arise and then taxes this income only once—whether it is paid out as dividends or retained by companies.
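The double-taxation point can be made concrete with a combined effective rate: a dollar of profit taxed at the corporate level and again as a dividend bears a total burden of 1 − (1 − t_c)(1 − t_d). A small sketch with round illustrative rates (not current statutory rates):

```python
# Illustrative combined burden on a dollar of corporate profit taxed once
# at the corporate level and again when paid out as a dividend.
# The rates below are round illustrative numbers, not statutory rates.

def combined_rate(corporate_rate, dividend_rate):
    """Effective total rate: 1 - (1 - tc) * (1 - td)."""
    return 1 - (1 - corporate_rate) * (1 - dividend_rate)

tc, td = 0.35, 0.15
print(f"{combined_rate(tc, td):.4f}")  # 0.4475: well above either rate alone
```

Taxing profits once, as they accrue to owners, would eliminate this compounding.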
Liberals love the idea of a levy on evil corporations, but taxes on corporate profits in fact make up only a small part of federal revenue, compared to the two main sources: the individual income tax and payroll taxes for Social Security and Medicare.
In 2009-10, taxes on corporate profits averaged 1.4 percent of G.D.P. and 8.6 percent of total federal receipts. Even from 2000 to 2008, when corporations were more profitable, these taxes averaged only 1.9 percent of G.D.P. and 10.3 percent of federal receipts. If we could get past the political fallout, we could get more revenue and improve economic efficiency by abolishing the corporate income tax and relying instead on a VAT.
I had a dream that Mr. Obama and Congress enacted this fiscal reform package—triggering a surge in the stock market and a boom in investment and G.D.P.—and that he was re-elected.
This dream could become reality if our leader were Ronald Reagan or Bill Clinton—the two presidential heroes of the American economy since World War II—but Mr. Obama is another story. To become market-friendly, he would have to abandon most of his core economic and political principles.
More likely, his administration will continue with more of the same: an expansion of payroll-tax cuts, short-term tax credits, promises to raise future taxes on the rich, and added spending on infrastructure, job training and unemployment benefits. The economy will probably continue in its sluggish state, possibly slipping into another recession. In that case, our best hope is for a Republican president far more committed to the principles of free markets and limited government than Mr. Bush ever was.
While most existing theoretical and experimental literatures focus on how a high probability of repeated play can lead to more socially efficient outcomes (for instance, using the result that cooperation is possible in a repeated prisoner’s dilemma), this paper focuses on the detrimental effects of repeated play—the ‘‘dark side of the future.’’ I study a resource division model with repeated interaction and changes in bargaining strength. The model predicts a negative relationship between the likelihood of repeated interaction and social efficiency. This is because the longer shadow of the future exacerbates commitment problems created by changes in bargaining strength. I test and find support for the model using incentivized laboratory experiments. Increases in the likelihood of repeated play lead to more socially inefficient outcomes in the laboratory.
In this Series in The Lancet, we review the past 50 years of Japan’s universal health coverage, identify the major challenges of today, and propose paths for the future, within the context of long-term population aging and the devastating crises triggered by the March 11 earthquake. Japan is recognised internationally for its outstanding achievements during the second half of the 20th century, in both improving the population’s health status and developing a strong health system. At the end of World War 2, in Japan, life expectancy at birth was 50 years for men and 54 years for women; by the late 1970s, Japan overtook Sweden as the world’s leader for longest life expectancy at birth. Japanese women have remained in the number one slot for 25 years, reaching a life expectancy of 86.4 years in 2009 (while Japanese men slipped to fifth longest living that year, at 79.6 years).

In 2011, Japan celebrates 50 years of kaihoken: health insurance for all. Universal health insurance was achieved in 1961, assuring access to a wide array of health services for the whole population. Since then, benefits have become more egalitarian while health expenditures have remained comparatively low: 8.5% of gross domestic product in 2008, ranking 20th among countries in the Organisation for Economic Co-operation and Development. This achievement is all the more remarkable because the percentage of the population aged 65 years or older has increased nearly four-fold (from 6% to 23%) over the past 50 years.
Using the most comprehensive data file ever compiled on air pollution, water pollution, environmental regulations, and infant mortality from a developing country, the paper examines the effectiveness of India’s environmental regulations. The air pollution regulations were effective at reducing ambient concentrations of particulate matter, sulfur dioxide, and nitrogen dioxide. The most successful air pollution regulation is associated with a modest and statistically insignificant decline in infant mortality. However, the water pollution regulations had no observable effect. Overall, these results contradict the conventional wisdom that environmental quality is a deterministic function of income and underscore the role of institutions and politics.
President Obama should take a page from Ronald Reagan’s playbook in winning the final inning of the Cold War. Obama can challenge President Mahmoud Ahmadinejad to put his enriched uranium where his mouth is—by stopping all Iranian enrichment of uranium beyond the 5 percent level.
A quarter-century ago, Soviet leader Mikhail Gorbachev was touting a new “glasnost”: openness. President Reagan went to Berlin and called on Gorbachev to “tear down this wall.” Two years later, the Berlin Wall came tumbling down and, shortly thereafter, the Soviet “evil empire” fell as well.
While in New York for the opening of the UN General Assembly in September, Ahmadinejad on three occasions made an unambiguous offer: He said Iran would stop all enrichment of uranium beyond the levels used in civilian power plants—if his country is able to buy specialized fuel enriched at 20 percent, for use in its research reactor that produces medical isotopes to treat cancer patients.
Obama should seize this proposal and send negotiators straightaway to hammer out specifics. Iran has been enriching uranium since 2006, and it has accumulated a stockpile of uranium enriched at up to 5 percent, sufficient after further enrichment for several nuclear bombs. Iran is also producing 20 percent material every day, and it announced in June that it planned to triple its output. Halting Iran’s current production of 20 percent material and its projected growth would be significant.
A stockpile of uranium enriched at 20 percent shrinks the potential timeline for breaking out to bomb material from months to weeks. In effect, having uranium enriched at 20 percent takes Iran 90 yards along the football field to bomb-grade material. Pushing it back below 5 percent would effectively move Tehran back to the 30-yard line, much farther from the goal of bomb-grade material. Even more important, extracting from Iran a commitment to a bright red line capping enrichment at 5 percent would stop the Islamic Republic from advancing on its current path to 60 percent enrichment and then 90 percent.
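The "90 yards" image reflects the physics of enrichment: most separative work is expended at low enrichment levels. A sketch using the standard separative-work-unit (SWU) accounting; the feed assay (natural uranium, 0.711%) and tails assay (0.3%) are common textbook assumptions, not figures from this article:

```python
import math

# Sketch of why 20 percent enrichment is "90 yards down the field," using
# standard separative work unit (SWU) accounting. Feed assay (natural
# uranium, 0.711%) and tails assay (0.3%) are textbook assumptions.

def value(x):
    """Value function for separative work."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_per_kg(product, feed=0.00711, tails=0.003):
    """SWU required per kg of product at the given assay."""
    f = (product - tails) / (feed - tails)  # kg of feed per kg of product
    w = f - 1                               # kg of tails per kg of product
    return value(product) + w * value(tails) - f * value(feed)

full = swu_per_kg(0.90)             # natural uranium -> 90% (bomb grade)
rest = swu_per_kg(0.90, feed=0.20)  # 20% stockpile  -> 90%
print(f"share of work already done at 20%: {1 - rest / full:.0%}")
```

Under these assumptions, roughly nine tenths of the separative work needed for bomb-grade material is already done once uranium is enriched to 20 percent, which is the point of the football analogy.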
Stopping Iran from enriching beyond 5 percent is not, in itself, a “solution” to its nuclear threat. Nor was Reagan’s proposal to Gorbachev. The question for Reagan was whether we would be better off with the Berlin Wall or without it.
Iran today is the most sanctioned member of the United Nations; it has been the target of five Security Council resolutions since 2006 demanding that it suspend all uranium enrichment. The United States and Europe have organized their own, tougher economic sanctions forbidding businesses from trading with Iranian companies and limiting Iran’s access to financial markets.
But Iran does not require the permission of the United Nations or, for that matter, the United States to advance its nuclear program within its borders. Nor are current or future sanctions likely to dissuade Iran from progressing steadily toward a nuclear weapon.
So far, Obama has essentially continued the Bush administration’s policy toward Iran with one addition: an authentic offer from the start of his administration to begin negotiations. Negotiations, however, have not been feasible because of sharp divisions within Iran. Those rifts were exacerbated after the June 2009 elections, in which Iran’s ruling powers (Supreme Leader Ayatollah Ali Khamenei, Ahmadinejad and the Revolutionary Guard) rigged the presidential vote and then moved to suppress the opposition Green Movement protests. In the last two years, they have tightened control over their society.
Enter Ahmadinejad’s proposal to stop all enrichment at the 5 percent level—without preconditions. Although differences between Ahmadinejad and the supreme leader have become evident, the United States should pay attention to the president’s offer.
Arguments against testing the offer are easy to make. An embattled Ahmadinejad may not be able to deliver. Iran will use negotiations to seek to relax or escape current sanctions. If a deal were reached, it would be more difficult to win international support for the next round of sanctions. An agreement that stops only the 20 percent enrichment could imply a degree of acceptance of Iran’s ongoing enrichment up to 5 percent.
Recognizing all of these negatives, however, the policy question remains: Would the United States be better off with Iran enriching its uranium to 20 percent or without it?
President Obama should act now to test Ahmadinejad’s word.
Most social scientists would like to believe that their profession contributes to solving pressing global problems. There is today no shortage of global problems that social scientists should study in depth: ethnic and religious conflict within and between states, the challenge of economic development, terrorism, the management of a fragile world economy, climate change and other forms of environmental degradation, the origins and impact of great power rivalries, the spread of weapons of mass destruction, just to mention a few. In this complex and contentious world, one might think that academic expertise about global affairs would be a highly valued commodity. One might also expect scholars of international relations to play a prominent role in public debates about foreign policy, along with government officials, business interests, representatives of special interest groups, and other concerned citizens. Yet the precise role that academic scholars of international affairs should play is not easy to specify. Indeed, there appear to be two conflicting ways of thinking about this matter. On the one hand, there is a widespread sense that academic research on global affairs is of declining practical value, either as a guide to policymakers or as part of broader public discourse about world affairs. On the other hand, closer engagement with the policy world and more explicit efforts at public outreach are not without their own pitfalls. Scholars who enter government service or participate in policy debates may believe that they are "speaking truth to power," but they run the risk of being corrupted or co-opted in subtle and not-so-subtle ways by the same individuals and institutions that they initially hoped to sway. The remainder of this essay explores these themes in greater detail.
HKS Faculty Research Working Paper Series RWP11-030, John F. Kennedy School of Government, Harvard University.
The human race is interconnected as never before. Is that a good thing? Ask the Lords of the Internet—the men running the companies Eric Schmidt of Google recently called “the Four Horsemen”: Amazon, Apple, Facebook, and Google—and you’ll get an unequivocal “yes.” But is it true? In view of the extraordinary economic and political instability of recent months, it’s worth asking if the Netlords are the Four Horsemen of a new kind of information apocalypse.
Don’t get me wrong. I love all that these companies have achieved. I order practically everything except haircuts from Amazon. I write this column on a MacBook Pro. I communicate with my kids via Facebook. It’s 6:55 a.m., and I’ve already run six searches on Google. Did I forget to mention that I’ve already received 29 emails and sent 14?
I also really like the Netlords. They are among the smartest guys on the planet. Yet they are also self-deprecating and sometimes very funny. (OK, not Steve Jobs.) So my question for them is a real question, not some kind of Luddite rant: does the incredible network you have created, with its unprecedented scale and speed, not contain a vulnerability? I’m not talking here about the danger of its exploitation by Islamist extremists or its incapacitation by Chinese cyberwarriors, though I worry about those things too. No, I mean the possibility that the global computer network formed by technologically unified human minds is inherently unstable—and that it is ushering in an era of intolerable volatility.
The communications revolution we are living through has been driven by two great forces. One is Gordon E. Moore’s “law” (which he first proposed in 1965) that the number of transistors that can be placed inexpensively on an integrated circuit doubles approximately every 18 months. In its simplified form, Moore’s Law says that computing power will double every two years, implying a roughly 30-fold increase in 10 years. This exponential trend has now continued for more than half a century and is expected by the techies to continue until at least 2015 or 2020.
The other force is the exponential growth of human networks. The first email was sent at the Massachusetts Institute of Technology in the same year Moore’s Law was born. In 2006 people sent 50 billion emails; last year it was 300 billion. The Internet was born in 1982. As recently as 1993 only 1 percent of two-way telecommunication went through it. By 2000 it was 51 percent. Now it’s 97 percent. Facebook was dreamed up by an über-nerd at my university in 2004. It has 800 million active users today—eight times the number of three years ago.
Russian venture capitalist Yuri Milner sees this trend as our friend (it has certainly been his). As the number of people online doubles from 2 billion to 4 billion over the next 10 years and the number of Internet-linked devices quadruples from 5 billion to 20 billion, mankind collectively gets more knowledge—and gets smarter. Speaking at a conference in Ukraine in mid-September, Milner asserted that data equivalent to the total volume of information created from the beginning of human civilization until 2003 can now be generated in the space of just two days. To cope with this information overload, he looks forward to “the emergence of the global brain, which consists of all the humans connected to each other and to the machine and interacting in a very unique and profound way, creating an intelligence that does not belong to any single human being or computer.”
In the future as imagined by Google, this global brain will do much of our thinking for us, telling us (through our handheld devices) which of our friends is just around the next corner and where we can buy that new suit we need for the best price. And if the best price is on Amazon, we’ll just click once and look forward to its next-day delivery. Maybe it’ll already be there when we get home.
That’s the kind of sci-fi scenario that gets a true nerd out of bed in the morning. But is it just a bit too utopian?
Exhibit one for a contrarian view is the recent behavior of global financial markets, the area of human activity furthest down the road of computerization and automation. According to math wonk Kevin Slavin, algorithms with names like the “Boston Shuffler” are the new masters of the financial universe. Whole tower blocks have been hollowed out to accommodate the computing power required by high-frequency (and very high-speed) trading. So how is this brave new world of robot traders doing?
Well, the VIX index of volatility—Wall Street’s so-called fear gauge, which infers the expected volatility of the U.S. stock market from options prices—reached an all-time high of 80 in the aftermath of Lehman Brothers’ failure and surged back up above 30 in early 2010 and again this summer. Part of this is just a good old-fashioned, man-made financial crisis, of course. But some of the volatility we’ve seen in the past four years is surely attributable to technology: think only of the “flash crash” of May 6 last year, when the Dow Jones industrial average plummeted 9 percent and then rallied in a matter of minutes.
Could the same kind of volatility spread into other markets as these become as wired and as integrated as Planet Finance? The answer must be yes. Consider how Greece’s fiscal woes have destabilized markets across Europe and around the world in recent months. Then there’s the market for consumer durables. We know that the speed with which new technologies have been adopted by American households has increased around eightfold over the past hundred years. But that speed of adoption has its obverse in the speed of obsolescence. Consumers are becoming ever more fickle. Millions bought RIM’s BlackBerry after its advent in 1999. But today the iPhone is the hotter handheld device, and I am far from alone in having a dead BlackBerry in my bottom desk drawer. In late September Amazon launched the Kindle Fire in a bid to challenge the iPad’s dominance of the tablet market. The name is appropriate. The market for such devices is on fire. The whole world is on wi-fire.
In politics, too, online electorates are becoming more volatile. The current race to find a Republican candidate for the presidency is a case in point. Only the other day Sarah Palin was a serious contender. Then Mitt Romney was a shoo-in. Until Rick Perry came along. Until Chris Christie came along. Meanwhile, the number of independent voters who have uncoupled themselves from the traditional parties has reached a historic high of 37 percent. Floating voters are the high-frequency traders of the political market.
Computing power has grown exponentially. So has the human network. But the brain of Homo sapiens remains pretty much the same organ that evolved in the heads of African hunter-gatherers 200,000 years ago. And that brain has a tendency to swing in its mood, from greed to fear and from love to hate.
The reality may be that by joining us all together and deluging us with data, the Netlords have ushered in a new Age of Volatility, in which our primeval emotions are combined and amplified as never before.
We are LinkedIn, but StressedOut. And that “cloud” of downloadable data may yet turn out to be a thundercloud.
It was a scene to curdle liberal blood. A ballroom full of New York hedge-fund managers playing poker…to raise money for charter schools.
That’s where I found myself last Wednesday: at a Texas Hold ’Em tournament to raise money for the Success Charter Network, which currently runs nine schools in some of New York’s poorest neighborhoods.
While Naomi Wolf was being arrested for showing solidarity with the Occupy Wall Street movement, there I was, consorting with the 1 percent the protesters hate. It’s no surprise that the bread-heads enjoy gambling. But to see them using their ill-gotten gains to subvert this nation’s great system of public education! I was shocked, shocked.
Except that I wasn’t. I was hugely cheered up. America’s financial elite needs a compelling answer to Occupy Wall Street. This could be it: educate Harlem…with our poker chips.
Life, after all, is a lot like poker. No matter how innately smart you may be, it’s very hard to win if you are dealt a bad hand.
Americans used to believe in social mobility regardless of the hand you’re dealt. Ten years ago, polls showed that about two thirds believed “people are rewarded for intelligence and skill,” the highest percentage across 27 countries surveyed. Fewer than a fifth thought that “coming from a wealthy family is essential [or] very important to getting ahead.” Such views made Americans more tolerant than Europeans and Canadians of inequality and more suspicious of government attempts to reduce it.
Yet the hardships of the Great Recession may be changing that, giving an unexpected resonance to the Occupy Wall Street movement. Falling wages and rising unemployment are making us appreciate what we ignored during the good times. Social mobility is actually lower in the U.S. than in most other developed countries—and falling.
Academic studies show that if a child is born into the poorest quintile (20 percent) of the U.S. population, his chance of making it into the top decile (10 percent) is around 1 in 20, whereas a kid born into the top quintile has a better than 40 percent chance. On average, then, a father’s earnings are a pretty good predictor of his son’s earnings. This is less true in Europe or Canada. What’s more, American social mobility has declined markedly in the past 30 years.
A compelling explanation for our increasingly rigid social system is that American public education is failing poor kids. One way it does this is by stopping them from getting to college. If your parents are in the bottom quintile, you have a 19 percent chance of getting into the top quintile with a college degree—but a miserable 5 percent chance without one.
Your ZIP code can be your destiny, because poor neighborhoods tend to have bad schools, and bad schools perpetuate poverty. But the answer is not to increase spending on this failed system—nor to expand it at the kindergarten level, as proposed by Nicholas Kristof in The New York Times last week. As brave reformers like Eva Moskowitz know, the stranglehold exerted by the teachers’ unions makes it almost impossible to raise the quality of education in subprime public schools.
The right answer is to promote the kind of diversity and competition that already make the American university system the world’s best. And one highly effective way of doing this is by setting up more charter schools—publicly funded but independently run and union-free. The performance of the Success Charter Network speaks for itself. In New York City’s public schools, 60 percent of third, fourth, and fifth graders passed their math exams last year. The figure at Harlem Success was 94 percent.
The American Dream is about social mobility, not enforced equality. It’s about competition, not public monopoly. It’s also about philanthropy, not confiscatory taxation.
I’ll cheer up even more when I hear those words at a Republican presidential debate. Or maybe next week we should just tell the candidates to shut up and play poker.
“Treat people as they want to be and you help them become what they are capable of being.” —Johann Wolfgang von Goethe
What is the motivating force behind all human interaction—in families, in communities, in the business world, and in relationships from the personal level to the international level? DIGNITY. It is the desire to be treated well. It is an unspoken human yearning that is at the heart of all conflicts, yet no one is paying attention to it.
When dignity is violated, the response is likely to involve aggression, even violence, hatred, and vengeance; the human connection is the first thing to go. On the other hand, when people treat each other with dignity, they feel their worth is recognized, creating lasting and meaningful relationships. Surprisingly, most people have little understanding of dignity. While a desire for dignity is universal, knowing how to honor it in ourselves and others is not.
After working as a conflict resolution specialist for twenty years, I have observed and researched the circumstances that give rise to dignity violations. When, by contrast, the following ten elements of dignity are honored, people feel their dignity has been recognized and that they have been treated well. Relationships flourish under these conditions.

The Ten Essential Elements of Dignity
Acceptance of Identity. Approach people as being neither inferior nor superior to you. Give others the freedom to express their authentic selves without fear of being negatively judged. Interact without prejudice or bias, accepting the ways in which race, religion, ethnicity, gender, class, sexual orientation, age, and disability may be at the core of other people’s identities. Assume that others have integrity.
Inclusion. Make others feel that they belong, whatever the relationship—whether they are in your family, community, organization, or nation.
Safety. Put people at ease at two levels: physically, so they feel safe from bodily harm, and psychologically, so they feel safe from being humiliated. Help them feel free to speak without fear of retribution.
Acknowledgement. Give people your full attention by listening, hearing, validating, and responding to their concerns, feelings, and experiences.
Recognition. Validate others for their talents, hard work, thoughtfulness, and help. Be generous with praise, and show appreciation and gratitude to others for their contributions and ideas.
Fairness. Treat people justly, with equality, and in an evenhanded way according to agreed-on laws and rules. People feel that you have honored their dignity when you treat them without discrimination or injustice.
Benefit of the Doubt. Treat people as trustworthy. Start with the premise that others have good motives and are acting with integrity.
Understanding. Believe that what others think matters. Give them the chance to explain and express their points of view. Actively listen in order to understand them.
Independence. Encourage people to act on their own behalf so that they feel in control of their lives and experience a sense of hope and possibility.
Accountability. Take responsibility for your actions. If you have violated the dignity of another person, apologize. Make a commitment to change your hurtful behaviors.
Our desire for dignity resides deep within us, defining our common humanity. If our capacity for indignity is our lowest common denominator, then our yearning for dignity is our highest. And if indignity tears relationships apart, then dignity can put them back together again.
Our ignorance of all things related to dignity, how to claim our own and how to honor it in others, has contributed to many of the conflicts we see in the world today. This is as true in the boardroom and in the bedroom as it is in politics and international relations. It is true for all human interaction. If we are to evolve as a species, there is no greater need than to learn how to treat each other and ourselves with dignity. It is the glue that could hold us all together. And it doesn’t stop there. Not only does dignity make for good human relationships, it does something perhaps far more important—it creates the conditions for our mutual growth and development. It is a distraction to have to defend oneself from indignity. It takes up our time and uses up our precious energy. The power of dignity, on the other hand, only expands with use. The more we give, the more we get.
There is no greater leadership challenge than to lead with dignity, helping us all to understand what it feels like to be honored and valued and to feel the incalculable benefits that come from experiencing it.
The leadership challenge exists at all levels, from those in the world of politics, business, education, and religion to everyday leadership in our personal lives.
Peace will not flourish anywhere without dignity.
There is no such thing as democracy without dignity, nor can there be authentic peace if people are suffering indignities.
Last but not least, feeling dignity’s power—both by honoring it and locating our own inner source of it—sets us up for one of humanity’s greatest gifts: the experience of being in relationship with others in a way that brings out the best in one another, allowing us to become more of what we are capable of being.
In Cases about Redefining Global Strategy, Pankaj Ghemawat and Jordan Siegel have assembled 26 full-length case studies as a resource for active learning about the nature of cross-border differences and strategies. As technology innovation globalizes markets and firms, management education must adopt a truly modern perspective on globalization, one that illuminates differences across borders rather than emphasizing similarities and imposing local models onto far-flung cultures. A new generation of managers and innovators who must compete in a "flat" world cannot succeed while following a one-size-fits-all approach to global strategy. Pankaj Ghemawat, Professor of Strategy at Spain's IESE Business School and author of World 3.0 and Redefining Global Strategy, and Harvard Business School Professor Jordan Siegel represent a new era of thinking in global strategy. This carefully chosen selection of classics and new material from Harvard Business Publishing also includes an introduction and six introductory module notes that identify key themes and strategic concepts explored in the cases. Though attuned to the format of an MBA course, the cases and text may also be used individually or in programs outside the strategy curriculum.
During the three days that the Greek Parliament was discussing and voting on the latest round of austerity measures, 138 police officers were injured, more than 500 protesters were hospitalized with breathing problems caused by the police’s use of tear gas, Syntagma metro station resembled a wartime hospital, and dozens of protesters were wounded; 46 demonstrators were taken to police stations and 11 of them arrested on June 29 alone.
The police brutality was unprecedented according to Skai news and many witness accounts. Through Twitter, Facebook, email, and text messages, the Greek protesters spread the word of indiscriminate police beatings.
A peaceful protester injured by the police called a radio station to express his consternation at the attack he suffered at the hands of the police, “who are supposed to be there to protect citizens.” He further argued that he was there to “protest for Greece and its rights, so why was I attacked by another Greek?” Another citizen claimed that he was almost beaten by motorcycle police while walking around recording the events with a camera and that what saved him was an old expired press pass. At the same time, families were calling in reporting brute force without any provocation on their part. Many citizens, especially older ones, claim that they tried to talk to the police officers and dissuade them from using chemicals against simple protesters but to no avail.
Amnesty International had already condemned Greece for the use of force against protesters on June 15. June 29 was much worse.
There are several possible explanations for why Greek police used such force. One view is that it was hard for them to tell which were peaceful demonstrators and which were troublemakers. The police might have felt threatened by the mayhem. They may have determined that if they did not strike first, the protesters would attack them.
An alternative explanation is that the government wanted to break the “Indignant” movement using force. The vast majority of protesters saw the events as a strategy employed by the state to keep them from protesting. After all, most protesters were family types who were not going to remain there under such circumstances. And as expected, they fled the scene.
The ones left were younger, more determined and enraged and, again expectedly, engaged in street fights with the police. Thus, what was a peaceful demonstration that challenged the legitimacy of the government, if not the Parliament as a whole, turned into the “usual” fight between the “known unknowns,” as they are often referred to, and the police forces.
On top of this, some believe the government planted provocateurs among the peaceful protesters to justify the escalation. Whether or not this hypothesis is true, the mere perception is damaging to the reputation of the government and the police. Let’s hope that these events have not killed peaceful protest.
All this violence was happening while those inside the Parliament had just voted in favor of the new austerity measures. Many think that it was much more convenient for the government that people were discussing police brutality rather than the midterm plan that was being voted on. Regardless of motivation, that was indeed the case. The next day, June 30, when the government had to vote on the implementation law of the plan, there was hardly anyone in the ruins of Syntagma Square and the discussion within the Parliament had turned into a discussion about the quality of democracy and the right of people to demonstrate freely.
Public Order Minister Christos Papoutsis, who is ironically now called the citizens’ protection minister, made an analytical distinction between governmental and police responsibility.
The head of the main opposition New Democracy party, Antonis Samaras, suggested that the scenes raised questions about the existence of state-sponsored provocateurs. However, ND deputy Manolis Kefaloyiannis later rushed to congratulate the police officers and, together with Health Minister Andreas Loverdos, repeated the high number of police officers wounded during the street battles.
The leader of the right-wing nationalist Popular Orthodox Rally (LAOS) party, Giorgos Karatzaferis, suggested that special recognition should be given to the Evzones presidential guards because they remained in position before the Parliament during the fighting despite the fact that tears were running down their faces due to the chemicals used against the protesters.
Dora Bakoyannis, the head of the Democratic Alliance political grouping who was expelled from the main opposition ND party in 2010, commented only on the destruction of Hania MPs’ offices by a raging crowd.
The parties of the left were furious and suggested that the democratic foundations of the political system have cracked.
Of course, in the end the vote passed.
Killing terrorists with drones is great politics. To the question, “Is it legal?” a natural answer might well be, “Who cares?”
But the legal justifications in the war on terrorism do matter, and not just to people who care about civil liberties. They end up structuring policy. As it turns out, targeted killing, now the hallmark of the Barack Obama administration’s war on terrorism, has its roots in rejection of the legal justifications once offered for waterboarding prisoners.
The leaking of the basic content (but not the text) of an Obama administration memo authorizing the drone strike that killed US citizen Anwar Al-Awlaki therefore calls for serious reflection about where the war on terrorists has been, and where it is headed next.
The George W. Bush administration’s signature anti-terror policy after the September 11 attacks (apart from invading countries) was to capture suspected terrorists, detain them, and question them aggressively in the hopes of gaining actionable intelligence to prevent more attacks.
In the Bush years, after the CIA and other agencies balked at the interrogation techniques being urged by Vice President Dick Cheney, the White House asked the Department of Justice to explain why the most aggressive questioning tactics were legal. Lawyers at the Office of Legal Counsel—especially John Yoo, now a professor at the University of California at Berkeley—produced secret memos arguing that waterboarding wasn’t torture.

The Torture Memos
What was more, the memos maintained, it didn’t matter if it was torture or not, because the president had the inherent constitutional authority to do whatever was needed to protect the country.
Some of the documents were leaked and quickly dubbed “the torture memos.” A firestorm of legal criticism followed. One of the most astute and outraged critics was Marty Lederman, who had served in the Office of Legal Counsel under President Bill Clinton. With David Barron, a colleague of mine at Harvard, Lederman went on to write two academic articles attacking the Bush administration’s theories of expansive presidential power. Eventually, Jack Goldsmith, who led the Office of Legal Counsel in 2003–2004 (and is now also at Harvard), retracted the most extreme of Yoo’s arguments about the president’s inherent power.
In the years leading to the 2008 election, all this technical criticism of the Bush team’s legal strategy merged with domestic and global condemnation of the administration’s detention policies. The Supreme Court weighed in, finding that detainees were entitled to hearings and better tribunals than were being offered. As a candidate, Obama joined the bandwagon, promising to close the prison at Guantanamo Bay, Cuba, within a year of taking office.
Guantanamo is still open, in part because Congress put obstacles in the way. Instead of detaining new terror suspects there, however, Obama vastly expanded the tactic of targeting them, with eight times more drone strikes in his first year than in all of Bush’s time in office. Barron and Lederman, the erstwhile Bush critics, were appointed to senior positions in the Office of Legal Counsel—where they wrote the recent memo authorizing the Al-Awlaki killing.
What explains these startling developments? If it’s illegal and wrong to capture suspected terrorists and detain them indefinitely without a hearing, how exactly did the Obama administration decide it was desirable and lawful to target and kill them?
The politics were straightforward. Obama’s team observed that holding terror suspects exposed the Bush administration to harsh criticism (including their own). They wanted to avoid adding detainees at Guantanamo or elsewhere.
A Father’s Appeal
Dead terrorists tell no tales—and they also have no lawyers shouting about their human rights. Before Al-Awlaki was killed, his father sued the government for putting the son on its target list. The Obama Justice Department asked the court to dismiss the claim as being too closely related to government secrets. The court agreed—a result never reached in all the Guantanamo litigation. Anwar Al-Awlaki now has no posthumous recourse.
In the bigger picture, Obama also wanted to show measurable success in the war on terrorism while withdrawing troops from Iraq and Afghanistan. But even here the means were influenced by legal concerns.

Osama bin Laden is the best example. The US forces who led the fatal raid in Abbottabad almost certainly could have taken him alive. But detaining and trying him would probably have been a political disaster. So they shot him on sight, as the international law of war allows for enemies unless they surrender.

The authority for targeted killing—as expressed in the Lederman-Barron memo—offers the legal counterpart to the political advantages of the Obama targeting policy. According to the leaks, the memo holds that the U.S. can kill suspected terrorists from the air not because the president has inherent power, but because Congress declared war on Al-Qaeda the week after the September 11 attacks.
The logic is that once Congress declares war, the president can determine whom we are fighting. The president found that Yemen-based Al-Qaeda in the Arabian Peninsula, which didn’t exist on September 11, had joined the war in progress. He determined that Al-Awlaki was an active member of the Yemeni group with some role in planning attacks. And, the memo says, it’s not unlawful assassination or murder if the targets are wartime enemies.
From a formal legal standpoint, Lederman and Barron can claim consistency with their attacks on the Bush administration. They relied on Congress and international law; Yoo’s “torture memos” didn’t.
But this argument misses the more basic point: Most critics rejected Bush’s policies not on technical grounds based on the Constitution, but because they thought there was something wrong with the president acting as judge and jury in the war on terrorism.
No Defense Allowed
Anwar Al-Awlaki was killed because the president decided he was an enemy. Like the Bush-era Guantanamo detainees, he had no chance to deny this, even when his father tried to go to court while he was still alive.
Naturally, a uniformed soldier in a regular war also wouldn’t get a hearing. But like the Guantanamo detainees, Al-Awlaki wore no uniform. Nor was he on a battlefield, except according to the view that anywhere in the world can be the battlefield in the war on terrorism.
Al-Awlaki might have maintained that he was merely a jihadi propagandist exercising his free speech rights as a U.S. citizen. That claim might well have been a lie. Yet we have only the president’s word that he was an active terrorist, and that is all we will ever have. The future direction of the policy is therefore clear: killing is safer, easier, and legally more defensible than capturing and detaining.
Sitting beside Al-Awlaki when he was killed was another U.S. citizen, Samir Khan, who was apparently a full-time propagandist, not an operational terrorist. Khan was, we are told, not the target, but collateral damage: a good kill under the laws of war.
Legal memos are weapons of combat—no matter who is writing them.