This paper seeks to understand the factors that cause disputes at the World Trade Organization to move from the negotiation stage to the panel stage. We hypothesize that transfer payments between states are costly to arrange and that the lowest-cost transfers are those that relate directly to the issue in dispute. This implies that when the subject matter of the dispute has an all-or-nothing character and leaves little room for compromise (for example, health and safety regulations), the parties' ability to reach an agreement through the use of transfers is restricted. In contrast, if the subject matter of the dispute permits greater flexibility (for example, tariff rates), the parties can more easily structure appropriate transfer payments through adjustments to the disputed variable. We conduct an empirical test of this hypothesis, finding support for it among democratic states.
International borders are usefully conceptualized as international political institutions that provide joint gains for the polities whose jurisdictions they distinguish. Far from irrelevant in an age of globalization, settled political borders help to make economic integration possible. But the international relations literature that focuses on territorial disputes has put too much emphasis on territory per se and far too little on the institutions of boundary settlement that produce joint gains. Students of international politics have cast the issues relating to territorial settlement in overly zero-sum terms, and may very well be missing an important impetus to conflict resolution in many cases. This paper shows empirically, for the case of Latin America, that territorial and border disputes entail opportunity costs (operationalized here as bilateral trade foregone). Mutually accepted borders mitigate these costs by reducing uncertainty, transaction costs, and other bilateral externalities of disputing. Theories of territorial settlement should take the possibility of such joint gains into account in their models of state dispute behavior.
Wars fought to redress grievous wrongs or put a stop to evil have been termed "just wars." The concept has its origins in classical and theological philosophy and was explicit in the Christian ethics of Saint Augustine. Just war theory describes the narrow circumstances and tight constraints on ends and means that are required to apply this term. Although Western law has slowly come to accept war as an inevitable instrument of national policy and turned its attention to setting standards for the conduct of war, important echoes of just war theory remain. A distinction was made at Nuremberg, and later embedded in Articles 2 and 51 of the United Nations Charter, between unacceptable aggressive war and acceptable wars of self-defence. Contemporary arguments about particular wars still rely on the seven main principles of just war theory. Application of these principles to the conflict in Afghanistan does not settle the debate, but it might help to structure the discussion.
This research note presents evidence on the conditions that influence governments' decisions to commit themselves to international human rights regimes. Are governments pressured by powerful state actors to make such commitments, as some realists have suggested? Or do governments instead cede the right to review internal rights policies to external authorities as the result of socialization through persuasion? What role do domestic political conditions and institutions play? This research note offers empirical evidence that addresses these issues. Using global data relating to the six "core" UN human rights treaties, I find the strongest evidence of external socialization, although governments presiding over common law legal systems tend to resist formalizing their rights commitments in external treaty form. There is little evidence of democratic "lock-in" using these data, although this remains a persuasive interpretation of the origins of the European human rights regime.
When I first arrived at the White House in September 1996, I had no idea that one of the issues on which I would spend the most time during my period as a member of President Clinton's Council of Economic Advisers was global climate change. But Under Secretary of State Tim Wirth had announced a major change in policy the month before: that the United States would now support, in multilateral negotiations, "legally binding" quantitative targets for the emission of greenhouse gases. This left 15 months for the US Administration to decide what kind of specifics it wanted at the Third Conference of Parties of the UN Framework Convention on Climate Change (UNFCCC), scheduled for November 1997 in Kyoto. Because other countries take their cue from the superpower (whether it is to support or oppose US positions), this countdown engendered a certain amount of suspense: what specifically would the U.S. propose at the Kyoto Conference, most notably regarding how the numerical targets should be determined? Outsiders demanded to know, with particular tenacity in the case of the U.S. Congress, which feared the worst. I was a member of a large interagency group that worked intensively on what was to become the Kyoto Protocol.
I never thought that the agreement had a large chance of being ratified by the U.S. Senate, or of coming into force in a serious way. There were too many unbridgeable political chasms, as I will explain. Furthermore, I understand the reasons why almost all economists, at least in the United States, disapprove of the Kyoto Protocol. Nevertheless, I am prepared to defend the Clinton version of the treaty, and I believe it was a step in the right direction.
I will begin by noting that the weight of scientific opinion seems indeed to have concluded that the Earth is getting warmer, that increasing concentrations of carbon dioxide and other greenhouse gases are the major cause, and that anthropogenic emissions are in turn responsible. I am not a scientist. But the latest IPCC report concludes "The globally averaged surface temperature is projected to increase by 1.4 to 5.8 degrees Celsius" over the period 1990 to 2100, and "global mean sea level is projected to rise by 0.09 to 0.88 metres." The evidence has become clearer over the last ten or twenty years. President George Bush, the Second, made a big mistake when he initially allied himself with the minority of disbelievers. It was a political mistake if nothing else. Even granting that the incoming administration in 2001 did not want to pursue Kyoto, it was foolish and unnecessary for the White House to dismiss the climate change problem.
This paper will take as given that the problem of global climate change is genuine, and is sufficiently important to be worth addressing by steps that are more than cosmetic. Because the externality is purely global – a ton of carbon emitted into the air, no matter where in the world, has the same global warming potential – the approach must be multilateral. Individual countries will not get far on their own, due to the free-rider problem. Accordingly, multilateral negotiations have proceeded under the UNFCCC since the Rio Summit of 1992.
The paper will summarize major decisions that the Clinton Administration had to make, and why it made them as it did. What were the quantitative limits on emissions to be? How would greenhouse gases other than carbon dioxide be treated? Would trading across time or across countries be permitted? And so on. In my time in the government, I was surprised to discover that policy makers often must make such technical-sounding decisions with relatively little help from the body of technical knowledge and opinion outside the government. It is not just that academic research is too abstract to be of much direct help with the minutiae of specific policy decisions. The pronouncements of think tanks and op-ed writers also ignore practical complexities, because they seek to make big points for general audiences. We were largely on our own.
It appears likely that the number of currencies in the world, having proliferated along with the number of countries over the past 50 years, will decline sharply over the next two decades. The question I plan to pose here is: where, from an economic point of view, should we aim for this process to stop? Should there be a single world currency, as Richard Cooper (1984) boldly envisioned? Should there remain multiple major currencies but with a much stricter arrangement among them for stabilizing exchange rates, as, say, Ronald McKinnon (1984) or John Williamson (1993) recommended? Building on Maurice Obstfeld and Rogoff (2000b, d), I will argue here that the status quo arrangement among the dollar, yen, and euro (which I take to be benign neglect) is not far from optimal, not only for now but well into the new century. And it would remain a good system even if political obstacles to achieving greater monetary policy coordination (or even a common world currency) could be overcome. Again, this is not a paper on, say, the pros and cons of dollarization for small and medium-sized economies, but rather on arrangements among the core currencies.

Any blueprint for the future core of the world currency system involves some crystal-ball gazing. But at the same time, recent research in international macroeconomics offers several important insights that can help inform the discussion.
My purpose in this essay is to raise some questions about what is involved in research on political violence. Since 1995 I have conducted ethnographic research in rural villages throughout Ayacucho, the region of Peru most heavily affected by the war between the guerrilla group Sendero Luminoso, the rondas campesinas (armed peasant patrols), and the Peruvian armed forces. A key factor motivating my research was a desire to write against the culture-of-violence arguments that were used to "explain" the war. The concept of a "culture of violence" or "endemic violence" has frequently been attributed to the Andean region, particularly to the rural peasants who inhabit the highlands. I wanted to understand how people make and unmake lethal violence in a particular social and historical context, and to explore the positioning and responsibilities of an anthropologist who conducts research in the context of war.
There are many well-developed theories that explain why governments redistribute income, but very few can explain why this is often done in a socially inefficient form. In the theory we develop, inefficient redistribution, compared to efficient methods, makes it more attractive to stay in or enter a group that receives subsidies. When political institutions cannot credibly commit to future policy, and when the political influence of a group depends on its size, inefficient redistribution is a tool to sustain political power. Our model may account for the choice of inefficient redistributive policies in agriculture, trade, and the labor market. It also implies that when factors of production are less specific to a sector, inefficient redistribution may be more prevalent.
Among countries colonized by European powers during the past 500 years, those that were relatively rich in 1500 are now relatively poor. We document this reversal using data on urbanization patterns and population density, which, we argue, proxy for economic prosperity. This reversal weighs against a view that links economic development to geographic factors. Instead, we argue that the reversal reflects changes in the institutions resulting from European colonialism. European intervention appears to have created an "institutional reversal" among these societies, meaning that Europeans were more likely to introduce institutions encouraging investment in regions that were previously poor. This institutional reversal accounts for the reversal in relative incomes. We provide further support for this view by documenting that the reversal in relative incomes took place during the late eighteenth and early nineteenth centuries, and resulted from societies with good institutions taking advantage of the opportunity to industrialize. This paper documents a reversal in relative incomes among the former European colonies. For example, the Mughals in India and the Aztecs and Incas in the Americas were among the richest civilizations in 1500, while the civilizations in North America, New Zealand, and Australia were less developed. Today the United States, Canada, New Zealand, and Australia are an order of magnitude richer than the countries now occupying the territories of the Mughal, Aztec, and Inca Empires.
I extend the standard materialistic rational choice model of conflict to consider groups. In particular, I consider how the aggregate amount of conflict in society depends on which groups form and oppose each other. The study is motivated by empirical findings about the relationship between inequality, conflict, and economic development. I focus on a salient comparison: ethnic groups versus social classes. I show that, contrary to the conventional wisdom, class conflict is not necessarily worse than ethnic conflict. In fact, ethnic conflict is generally worse when the distribution of income is more equal. I also investigate the implications of the fact that, while ethnicity is immutable, class is not, since there is social mobility. I show that the direct impact of mobility on conflict is as conventionally believed, but that there are important indirect effects which make the net effect ambiguous.
We develop a theory of political transitions inspired by the experiences of Western Europe and Latin America. Nondemocratic societies are controlled by a rich elite. The initially disenfranchised poor can contest power by threatening revolution, especially when the opportunity cost is low, for example during recessions. The threat of revolution may force the elite to democratize. Democracy may not consolidate because it is redistributive, and so gives the elite an incentive to mount a coup. Highly unequal societies are less likely to consolidate democracy, and may end up oscillating between regimes and suffering substantial fiscal volatility.
The literature on the role of religious institutions in ethnic conflict does not answer the question of whether these institutions support violence or the status quo. From a resource mobilization perspective, religious institutions generally have the organizational resources to facilitate opposition to the status quo. However, it is also clear that most religions at different times have supported both violence and the status quo. An analysis of 105 ethno-religious minorities using data from the Minorities at Risk project shows that religious institutions tend to inhibit peaceful opposition unless there is a sufficient level of perceived threat to the religious institutions or the religion itself, in which case religious institutions tend to facilitate political opposition among ethno-religious minorities. However, the decision to violently oppose a regime is based mostly on secular factors, including the desire for some form of autonomy or independence and political discrimination against the ethno-religious minority.
Studies in Conflict and Terrorism, Vol. 22, Iss. 2 (1999): 119-139.
Samuel Huntington's controversial "clash of civilizations" thesis posits that, among other things, the extent of both international and domestic conflict between 'civilizations' will increase with the end of the Cold War. This is expected to be especially true of clashes involving the Western and Islamic civilizations, and even more so for clashes between these two civilizations. In this article the author, using the Minorities at Risk dataset along with independently collected variables, tests these ethnic conflict propositions of Huntington's. The results of the author's analysis are examined from three perspectives: globally, from the perspective of the Islamic civilization, and from the perspective of the Western civilization. Globally, there has been little change in Islamic involvement in civilizational ethnic conflict since the end of the Cold War. However, from a Western perspective, the proportion of civilizational conflicts involving Western groups that are with Islamic groups increased dramatically after the end of the Cold War. Thus, the results show that if one focuses narrowly on the perspective of the Western civilization, there is some support for Huntington's claims regarding Islam, but not for a general increase in civilizational conflict. However, from the perspective of the Islamic civilization and from a broader global perspective, there is little support for Huntington's argument.
Journal of Peace Research, Vol. 38, no. 4(2001): 459-472.
Despite some success stories in the 1960s and early 1970s, Africa is poor and getting poorer. There is also an almost universally pessimistic consensus about its economic prospects. This consensus started to emerge in recent empirical work on the determinants of growth with Barro's (1991) discovery of a negative "African Dummy" and was summed up by Easterly and Levine's (1997) title, "Africa's Growth Tragedy." Table 4.1 collects some familiar comparative evidence on Africa's economic performance. The average sub-Saharan African country is poorer than the average low-income country and getting poorer. Indeed, the average growth rate has been negative since 1965, and there is approximately a 35-fold difference between the per capita income level of the average sub-Saharan country and the United States.
Against this background of poor performance, one African country, Botswana, has not only performed well but has performed better than any other country in the world over the last 35 years. In table 4.2 we examine the facts about Botswana in both an African and a more general context. Botswana had a PPP-adjusted income per capita of $5,796 in 1998, almost four times the African average, and between 1965 and 1998 it grew at an annual rate of 7.7 percent.
The terrorist attacks that destroyed the World Trade Center and damaged the Pentagon triggered the most rapid and dramatic change in the history of U.S. foreign policy. On September 10, 2001, there was not the slightest hint that the United States was about to embark on an all-out campaign against "global terrorism." Indeed, apart from an explicit disdain for certain multilateral agreements and a fixation on missile defense, the foreign policy priorities of George W. Bush and his administration were not radically different from those of their predecessors. Bush had already endorsed continued NATO expansion, reluctantly agreed to keep U.S. troops in the Balkans, reaffirmed the existing policy of wary engagement with Russia and China, and called for further efforts to liberalize global markets. The administration's early attention focused primarily on domestic issues, and new international initiatives were notably absent.
This business-as-usual approach to foreign policy vanished on September 11. Instead of education reform and tax cuts, the war on terrorism dominated the administration's agenda. The United States quickly traced the attacks to al-Qaeda, the network of Islamic extremists led by Saudi exile Osama bin Laden, whose leaders had been operating from Afghanistan since 1996. When the Taliban government in Afghanistan rejected a U.S. ultimatum to turn over bin Laden, the United States began military efforts to eradicate al-Qaeda and overthrow the Taliban itself. The United States also began a sustained diplomatic campaign to enlist foreign help in rooting out any remaining terrorist organizations "with global reach." U.S. officials emphasized that this campaign would be prolonged and warned that military action against suspected terrorist networks might continue after the initial assault on al-Qaeda and its Taliban hosts.
This article analyzes how the campaign against global terrorism alters the broad agenda of U.S. foreign policy. I focus primarily on the diplomatic aspects of this campaign and do not address military strategy, homeland defense, or the need for improved intelligence in much detail. These issues are obviously important but lie outside the bounds of this essay.
I proceed in three stages. The first section considers what the events of September 11 tell us about the U.S. position in the world and identifies four lessons that should inform U.S. policy in the future. The second section explores how the campaign against terrorism should alter the foreign policy agenda in the near-to-medium term: What new policies should the United States pursue, and what prior goals should be downgraded or abandoned? The third section addresses the long-term implications, focusing on whether the United States will be willing to accept the increased costs of its current policy of global engagement. I argue that this decision will depend in part on the success of the current campaign, but also on whether the United States can make its dominant global position more palatable to other countries.
As the People's Republic of China (PRC or China) seeks to use law to address environmental problems, it faces daunting challenges, in terms both of the magnitude of environmental degradation it is experiencing and the capacity of its legal institutions. Pollution levels in the major cities in the PRC are among the highest on earth. Epidemiological studies indicate that the concentration of airborne particulates is two to five times the maximum level deemed acceptable by the World Health Organization. A noted World Bank study based on "conservative" assumptions estimates that as of the mid-1990s "urban air pollution costs the Chinese economy US$32.3 billion annually in premature deaths, morbidity, restricted activity, chronic bronchitis, and other health effects." And new scholarly work suggests that the "health impacts fall disproportionately on women and children."
China's lawmakers have not ignored these problems. The PRC has in recent years sought to enlist the law to address its environmental ills. In 1995, and then again in 2000, China undertook significant revisions of its principal air pollution law, while throughout the decade of the 1990s it promulgated discrete measures concerning coal production, acid rain, and associated matters. To date, these legal changes have at best had a minor impact on the Chinese environment; but as we know from Bruce Ackerman and William Hassler's classic study of the making of air pollution law in the United States, "Clean Coal/Dirty Air," even in highly developed legal systems, efforts through law to address such issues pose massive challenges.
This article examines the 1995 revision of the Air Pollution Prevention and Control Law (the 1995 APPCL). The struggles attending that revision warrant our attention not only because of the gravity of China's air pollution, but also for the revealing window they provide onto Chinese legislative development more generally. Through it, we can better understand the inner workings of what is, under the Chinese constitution, the supreme organ of state, the National People's Congress (NPC); the interface of the NPC with other organs of state, national and sub-national; and, ultimately, the relationship of the Chinese state to its people. This has much to tell us about the particular limitations that prevented the 1995 APPCL from achieving more, the difficulties confronting overall efforts to deploy law to improve the Chinese environment, the growing politicization of environmental matters, and the challenges the Chinese state faces as it attempts both to represent popular interest in more transparent governmental institutions and to deepen its engagement in the international community as it prepares to accede to the World Trade Organization.
THE GLOBAL WATERSHED OF THE LATE 1980s and early 1990s did not leave Cuba untouched. The collapse of the European communist regimes and, in particular, of the Soviet Union also put an end to a long chapter of Cuban history that had begun in 1960. In its political, economic, and social system, Cuba had been distinct from the rest of the Americas during the last three decades of the Cold War in Europe. With the disappearance of its principal international ally, the Cuban government, cornered, was forced to undertake a change of course in the conduct of its national and international policy. That change of course, however, was the turn of the helm of an anchored ship, whose pilot reorients the vessel without upsetting its balance despite heavy swells.
A Research Report from the Organizing Religious Work Project, Hartford Institute for Religion Research, Hartford Seminary
The well-being of every community depends on harnessing the contributions of its citizens. Sustaining viable communities requires places where people can gather, work together, and learn to trust one another – where we generate what Robert Putnam has called "social capital."1 We depend on the neighborhood associations and political action groups, parent associations and leagues of civil rights activists, as well as the churches, synagogues, and mosques that provide places of concern, belonging, and action. This is a report on the work being done by such religious organizations and their community partners in seven representative communities in the U.S.
In a hierarchical relationship, by definition, the superior is entitled to exert influence on the subordinate and the subordinate is obligated to accept the superior’s influence. These rights and duties, however, are not unlimited. Ethical use of influence presupposes certain intraorganizational and extraorganizational limits on the demands and requests that the superior is entitled to make and that the subordinate is expected – at times, in fact, permitted – to carry out. To set the stage for discussion of such limits, I first review some of the moral principles according to which influence attempts must be assessed and present a typology that distinguishes among different means of influence.