If the 20th century was consumed by the global struggle between incompatible ideologies—fascism, communism, and democratic capitalism—the 21st century will be consumed by the epic challenge of creating and sustaining viable, effective states. Viable, effective states are the only form of collective governance that has a proven ability to contain and reverse a trajectory of growing entropy driven in part by the illicit networks described throughout this book. States have successfully fought off powerful illicit adversaries in all regions of the globe, from Colombia in South America to the Philippines in Asia. Some authors have argued that the state itself is a significant contributor to growing global entropy, and that is possibly true.1 Yet, the global state system has enabled great prosperity and security, and ready alternatives to state-based governance are few.
Over the past 25 years, the threat to international security posed by failed and failing states has been widely acknowledged. Scholar Robert Rotberg writes that such states “pose dangers not only to themselves and their neighbors but also to peoples around the world.”2 Indeed, in 2002, then President George W. Bush stated explicitly in his introduction to the National Security Strategy of the United States “…that weak states, like Afghanistan, can pose as great a danger to our national interests as strong states.”3 Only recently, however, have analysts begun to examine the contributions to state weakness and failure from converging terrorist, criminal, and insurgent networks.
Previous chapters in this book have described visions of a world experiencing pandemic state failure. This chapter will attempt to persuade the reader not only that a rule-based system of sovereign states provides a hospitable environment for human endeavor, but that it is the best remedy against the destructive effects of illicit actors and networks. The authors will argue that the unsuccessful cases of Iraq and Afghanistan should not be permitted to discredit the state-building enterprise, as they were, in fact, cases in which no coherent state-building strategy was ever developed, attempted, or sustained. The chapter will provide evidence supporting two theses: that successful state-building is possible, and that external intervention, though never without risk, can, if appropriately designed, effectively support successful state-building efforts. The final section will identify and describe a range of critical operational approaches to successful state-building to be considered by either local or external agents. These include:
- establishing a monopoly of the legitimate use of force;
- establishing the rule of law and mechanisms for articulation, adjudication, and redress of grievances;
- prohibiting and punishing corruption;
- creating an inclusive national narrative promoting citizenship;
- nurturing an economic environment in which citizens can meet their economic needs;
- establishing mechanisms for managing state assets and budgets and securing state financing based on tax-paying constituents;
- creating accessible mechanisms for interaction between civil society and the state;
- investing in human capital; and
- creating a reliable and competent civil service to administer official state functions and manage state assets.
For those critical of the modern state and its failures, there is much to lament: from the failure of many of the world’s states to provide citizens with fundamental public goods—not least security and safety—to the collective failure of the state system to effectively address global threats such as climate change and food insecurity. States big and small, rich and poor, advanced and underdeveloped have left much to be desired.
For the sake of objectivity, however, let us look at the other side of the ledger. The great innovation of Westphalia, the region in Germany where some 100 delegations met in 1648 to negotiate the end of the Thirty Years’ War that had devastated Central Europe, was not the invention of the state. Functioning bureaucratic administration had existed for eons in various forms and regions.4 The innovation of Westphalia was a rule-based system of states, based on the concept of sovereign equality. Diverse in wealth, populations, endowments, cultures, and religions, the Westphalian states were equal in their sovereignty, each with a right to a voice, to its own religion, customs, and practices with regard to its citizens. This resulted in a framework of widely respected norms for interstate behavior—to be sure, never for long unchallenged—that permitted states to turn greater attention toward their internal development.
While the Peace of Westphalia did not end war, let alone conflict, it did establish a culture of rule-based international relations conducive to unprecedented advancements in the human condition—as measured by the most widely accepted indicators of human development. Human longevity, which had lingered between 30 and 40 years for millennia, shot up from 35 years in 1650 to 67 years today.5 Income, which had hovered at subsistence levels nearly everywhere—barely enough to survive—catapulted from $615 to $10,000 per year between 1700 and 1900.6 Finally, literacy, long the preserve of the rich or of the clergy, reached over 80 percent by the 2000s, up from about 35 percent in 1650.7 Versions of these three indicators are the most commonly used today to measure human development.8 Can the innovation of the Westphalian state system lay claim to these unprecedented advances? Perhaps not exclusively—but the conditions and rules of interstate behavior produced an environment conducive to these impressive achievements. Building upon pre-Westphalian advances in bureaucratic governance, the Westphalian system expanded, until the post-World War II order extended it on a global basis, with investments to attain sovereign self-governance in all countries.
The state is by no means the only, let alone the earliest, set of institutions established to order political relations among populations. From time immemorial, political activity has been governed by more archetypal structures, such as the family, tribe, and clan—forms which are arguably so profoundly embedded in our human nature that they continually resurface.9 These forms of political organization evolved over centuries in some regions into various iterations of autocracy, such as monarchy, kingdom, and empire. At other times, in other places, political life has been governed by religious orders, ranging from the micro-monastic orders of medieval Christianity to the great caliphates of Islam. The 20th century witnessed a radical effort to organize political life along economic class lines, but by 1991, the great communist/socialist experiment had failed. And the others—the clan, tribe, feudal order, monarchy, caliphate, or empire—have failed to deliver the public goods needed for people to prosper in the 21st century. Thus, paraphrasing Churchill, the state is perhaps the worst form of government—except for all the others.
We would go well beyond Churchill; there is a great deal to be proud of in the accomplishments of modern states—but as earlier chapters in this book have made clear, none of these accomplishments can be taken for granted. There is much to lose, and the prospect of losing much or even all we have achieved in recent centuries—if history is any guide—is not as remote as some might think.10 Great advances of civilizations past have been reversed by war and by decay. The risk of losing these extraordinary and unprecedented achievements is nothing less than a return to the life characterized so famously by Hobbes as nasty, brutish, and short.11 The tremendous human progress we have experienced in recent centuries is grounded in the rules-based system of sovereign equality among states. The best hope to retain and build on this progress is to preserve the system of effective sovereign states, ensure that states can perform required functions, and adapt this system to the needs and realities of the 21st century—confronting both globalization and empowered citizens with higher expectations. Certainly a new kind of state will be required—innovative and adaptive enough to meet the challenges of the emerging threat environment, rather than a mere recreation of the 19th-century state.
Our contemporary state system is far from complete. The illusion of sovereignty, derided by one scholar as “organized hypocrisy,” has created in many minds a misleading world map of neatly divided countries (each with clear boundaries and its own bright color). In terms of governance, however, these many countries are, in fact, significantly diverse in legitimacy and effectiveness.12 To what extent does this world map, with which we are all familiar, represent realities across continents? South Korea differs vastly from Papua New Guinea, Botswana from South Sudan, and Switzerland from Greece. Of the 193 member states of the United Nations, how many can be characterized as exercising full and effective sovereignty? In 1994, Robert D. Kaplan wrote of the “lies of mapmakers,” describing the inadequacy of “this inflexible, artificial reality.”13 If sovereignty is to mean not only recognition by the family of nations, but effectively governing the population and territory within a specific set of borders, balancing the budget and controlling corruption, policing crime, providing opportunities for citizens, and guarding traffic through those borders, then how many are truly sovereign?14 The Economist reported in 2013 that 65 countries were at a high or very high risk of social unrest, 19 more than in 2009.15 Depending on the indicators used for sovereignty, the count of less than sovereign states could range from 40 to 140; either way, a considerable number remain outside the domain of sovereignty, leaving the world with a substantial sovereignty deficit.16
As others in this book have argued, the limited domain of effectively governed states—the Westphalian domain—is under attack. The illicit networks described throughout this book, in collusion with allied agents in faltering—and even within robust—states, work relentlessly to subvert the integrity of the system. They prey on states, both weak and strong, like parasites, and infect undergoverned spaces, turning these into black holes that attract all manner of criminal activity.17
We know of only one antidote to this degenerative threat, and that is the establishment of states capable of effective governance throughout the world. But many contemporary states are not on a trajectory toward legitimate and effective governance. States such as Honduras, the Democratic Republic of the Congo, and Iraq may have the desire, but lack the capacity to withstand countervailing forces. Their consolidation as legitimate and effectively governed states will require a concerted effort by capable, well-resourced, and well-intentioned external partners working with internal leaders, institutions, and constituencies. Such a concerted effort can only succeed if leaders have the will—the commitment and courage—to see the challenge through.
State-building regrettably has been discredited over the past two decades.18 Both the cost and difficulty inherent in a coherent and realistic approach to state-building have understandably soured policymakers and budget setters to the proposition of trying to stand up states. Indeed, the catastrophic state-building failures in Iraq and Afghanistan, where budgets were unprecedentedly large, have proven to many the futility of the effort and the concept. It is true that the coalitions engaged in the recent conflicts in Iraq and Afghanistan failed to establish effective, legitimate states; and even before those, there were disappointing failures in Somalia and Haiti. The dismissal of state-building as a vital national security tool, however, is a strategic folly of historical proportions. None of these failures was inevitable. There are numerous examples of successful state-building efforts, including Jordan, South Korea, the Republic of China (Taiwan), Singapore, and Colombia (a successful case of state-rebuilding), among others.
Iraq and Afghanistan, however, are not examples of the futility of state-building. In neither case was there any strategic analysis of the requirements of state-building. Efforts were half-hearted, misguided, and ad hoc. Some successful initiatives were tried, but then not sustained. These failures were neither complete nor inevitable; better strategy, execution, and judgment would have made success far more likely. Iraq and Afghanistan are both, in fact, excellent examples of the costs of the failure to develop a coherent and realistic state-building plan. Today, they both exemplify the multi-tiered costs of state failure: humanitarian tragedy, regional destabilization, and the feral cultivation of toxic ideologies and organizations.
Afghanistan is often dismissed as the poster child of why efforts at nation-building or state-building are futile. Yet for most of the 15 years since the 9/11 attacks and the Bonn Agreement, state-building and nation-building were explicitly rejected as goals and methods.
In the aftermath of 9/11, then President Bush and senior officers of his Administration explicitly rejected nation-building as an approach in Afghanistan. Instead, U.S. efforts focused on a military mission to collapse the Taliban regime, using U.S. airpower partnered with Northern Alliance militias to rout the Taliban from Kabul and then, operating from a base at Bagram outside Kabul, to defeat remaining Taliban forces in the countryside.
When a case was made to the State Department that, once the Taliban had fled, a critical issue would be the formation of a government that was acceptable to the Afghan public and could carry out minimal functions—not least holding the country together and preventing a civil war between warlord factions—that team was encouraged to work through the UN to establish negotiations. But these objectives were never more than peripheral for the United States. Instead, the U.S. effort continued to focus on partnering with warlords to raise and maintain militias to accomplish near-term security objectives. Initially, the bulk of the remaining effort was given to humanitarian relief, through a network of UN agencies and nongovernmental organizations (NGOs) that operated in parallel to, and often at odds with, the Afghan state.
Preparations to establish a new government took place through the auspices of the UN. The experienced UN diplomat-statesman, Lakhdar Brahimi, who had also tried to broker a successor to Najibullah’s government in Afghanistan in the early 1990s, took the helm of the team. It quickly became clear that the Taliban would fall much more swiftly than initially anticipated. Then Secretary of State Colin Powell said he had three instructions for the team: “Speed, speed, speed.” But although the team mobilized quickly and raced against time to assemble a group of Afghans at the Petersberg, a government guesthouse outside Bonn, to conduct political negotiations to establish a new government, facts on the ground were already being created. By the time the UN team assembled in Bonn on November 20, 2001, the Northern Alliance, backed by U.S. air power, had advanced into Kabul and had seized the key ministries.
The group assembled around the table at Bonn consisted of four factions—the Northern Alliance, the Rome Group (a set of courtiers to the former king), and two other political groups operating in exile—the Cyprus Group and the Peshawar Group. Absent was representation from Afghanistan’s southern regions and the Taliban. Aware of the limited representation, the group, facilitated by Brahimi, agreed on a set of governing principles and a phased process for establishing legitimate institutions of government over time. It would start with an Interim Administration, then an Emergency Loya Jirga would be convened. This would be followed by a Transitional Administration. Next was to be the drafting of a constitution by a Constitutional Commission that would put this constitution to a Constitutional Loya Jirga for ratification, followed by elections after some three years.
This political process held to its timetable and worked remarkably smoothly. It was accompanied by an effort to build—or resuscitate—the organs and institutions of state that would make the high-level political promises a reality in the towns and villages across the country. Resuscitate, because contrary to many widely held assumptions of a “blank slate” or “tabula rasa,” there in fact remained, in 2001, a remarkably resilient set of institutions and organizations of government across Afghanistan. A World Bank report found that there were “anywhere between 250,000 and 350,000” civil servants in place, in ministries and provincial and district offices across the country.19 While there had been considerable human capital flight from the country to neighboring countries and beyond, the backbone of the civil service had persisted from the time of the monarchy through the communist period, the civil war period, and the Taliban period. As in many other contexts, the initial international assistance effort did not stop to take stock of what was already in place, but rather assumed there were no institutions—or at least none worth considering building on or preserving—and instead built up alternative parallel delivery structures.
Despite this, a nascent initiative to build on existing structures and invest in reestablishing Afghanistan’s state institutions began. It started with mapping out existing ministries and institutional and organizational arrangements, and reviving basic processes of governance and decisionmaking, ranging from paying civil servants their basic salaries on time, to establishing weekly cabinet meetings with agendas, establishing a civil service commission to sift through competing factional claims to positions in the civil service, and getting a basic budget agreed, funded, and operational. From there, it proceeded to take stock of ministries and their capacities, and to design a small number of priority initiatives and actions to build the public’s confidence, demonstrate tangible progress and lay the foundations for the long term. Examples of initiatives included:
- Convening two Loya Jirgas on the basis of district elections together with some selected delegates, and establishing the Constitutional Commission.
- Changing the currency, to move from three separate currencies to a single currency, with new symbols of national unity. This was accomplished by using the Afghan hawala dealers and their networks, at a cost far lower than an alternative proposal by the UN system using UN personnel.
- Building the budget process as an instrument of policymaking, putting in place measures to raise revenues, creating a secure payroll system for government employees, and using a simple IT system to reconcile national accounts across the country.
- Creating a trust fund for development partner contributions that would be heavily audited and guarantee high levels of accountability for contributions.
- Building out nationwide programs to reach all villages, including the National Solidarity Program (NSP), which gave block grants to 34,000 villages to manage themselves.
- Establishing the National Health Program, a consortium of the United States, EU, and World Bank that agreed on common standards for basic health care and contracted out service provision in each province to those with the capacity to deliver—whether NGOs or the private sector.
- Designing the transportation network infrastructure, including starting on the ring road, rail infrastructure, and power linkages that would lay the basis for Afghanistan as a connector, rather than a black hole in connectivity in the region.
Most of these initiatives had a few interesting characteristics that stand in contrast to the mainstream way of approaching aid and development today. First, the initiatives were prepared in such a way that they could be designed, led, and managed by Afghan leaders and managers, with little or no foreign expert technical assistance. Second, they involved a large degree of citizen consultation, co-design, and participation. Third, while they were intended to enhance the legitimacy of the state, this did not mean that the state had to implement them entirely through its bureaucracy. Rather, the initiatives allowed for rules of the game to be set and policies determined with full government leadership, but funding could flow to NGOs, citizen groups, or private sector entities depending on capability and appropriateness. Fourth, the programs themselves, if implemented using domestic capacity, would cost a fraction of the price tags we are familiar with today. For example, a school built through the NSP would cost $20,000, versus up to 10 times that amount if built by a foreign contractor. This is not to say that foreign contractors are not sometimes needed—for instance, in the design and building of large infrastructure projects—but that alternative systems for small infrastructure and social programs can be built at much lower costs and with much greater domestic participation.
Many of these initiatives survive to this day and are lauded as the programs that work in the country, but they—and the program of legitimizing the state-citizen compact—did not become the dominant reality in the years that followed. There were two major obstacles. First, the privileging of support to militias and strongmen over the reform movement, missing the point that while short-term security might be bought through the former, long-term stability comes from building trust with the 95 percent of the public who constitute the reform constituency. When the insurgency took hold between 2004 and 2006, a large driver was the population’s loss of trust in fairness, the rule of law, and the restraint of predatory interests. Second, the operations of the aid system. While individuals usually had the best of intentions, the way the system as a whole operated served to fragment and fracture the Afghan state, paradoxically in the name of capacity building.
Sadly, the failure to reach an agreement that would give more space to the reform movement and bring the operations of the other two spheres into balance meant that the reform team lost hope and steam. Some key leaders convened around the “Cairo memo” in December of 2004, which predicted that the narco-mafia state would continue to consolidate its stranglehold on the state and that instability would set in.20 At the time, only modest measures would have been needed to address this. The reformers left; legitimacy dwindled and the insurgency returned. As this became clear, the international community swung into action, finally—by 2006—taking heed of recommendations to bolster the legitimacy of the state. But it was too little—or perhaps too much—too late. The “Golden Hour” had given the Afghan public only a tantalizing taste of what was possible.
But as the aid machine lumbered into gear for large-scale implementation, some key design lessons were lost. It veered from small footprint to massive footprint, and in the process, its tens of thousands of small and large projects overwhelmed and splintered the still fragile Afghan institutions, helping to import the culture and practice of corruption, as the availability of funding increased and the likelihood of consequences for corrupt activity diminished. The state became a parody of itself—creating the sovereignty paradox—and as its leaders increasingly claimed sovereignty, the trust of the population declined. The country was kept afloat—and its people employed—on the back of a large, and unsustainable, foreign presence.
As the international military and development presence began to draw down rapidly in 2014, the public came out to vote in huge numbers for a reform message. This was a clarion call from the Afghan public, cutting across ethnic, conservative/modern, and pro- and anti-government lines, for legitimate institutions, an anticorruption agenda, and reform. It demanded a real chance for the “Afghanization” of the state-building project. The National Unity Government has taken on this challenge under huge impediments: a turbulent election and an unconstitutional government arrangement, a rapid international security drawdown, the economic shock following that drawdown, an inherited administration corrupted and intertwined with criminal networks, and tough regional and security dynamics. Despite this, the public and the leaders have formed a constituency for reform, and it is progressing—the results will be judged in the years ahead.
Some key lessons can perhaps be drawn from this experience. First, efforts to establish minimally viable institutions are the work of decades, not months or years. Our benchmarks, yardsticks, and measurements must change accordingly. With the realism that most change happens over decades, rather than to the metronome beat of one-year log frames, comes the opportunity to sequence activities carefully over time. Perhaps a country will not reform its health system, its education system, and its military at the same time; leaders will tackle reforms in stages. But committing to a destination of a fair, rule-based government and a state that strives to meet the expectations of the citizenry is a goal worth pursuing.
Second, much of what it takes to create effective institutions and organizations is vastly cheaper than current methods of aid engagement suggest. Price tags in the tens of billions would not reflect real costs if implementation methods were redrawn and country systems used appropriately.
Third, doing so means curbing our addiction to donor circuses of tens of thousands of individual projects that fragment the state, and investing more in policy and program design before the project mayhem begins. By the time we are talking about coordination, it is already too late. The designers of the Marshall Plan (which, in its original form, we are not advocating) considered six designs and picked three. They judged that implementing the three they rejected would have meant that Europe would never be reconstructed. They rejected the vehicle of thousands of small projects, managed from Washington and other capitals, dependent on grant aid and micromanagement. Instead, they realized that unless the leaders of a country (and wider constituencies within that country) agreed on their own plans, implementation would not be assured. So they created a set of mechanisms that required country ownership of a plan, including a plan for enhancing domestic revenue so that the country would become self-reliant.
Fourth, taking this approach will require investment in the leaders and managers of the country and real engagement with its people. Such investment—through scholarship programs, training, and mentorship—will pay dividends. Imagine if South Sudan had, upon signing the Comprehensive Peace Agreement, set up a leadership academy and trained 10,000 of the next generation in management, leadership, accountancy, agriculture, oil management, and other key skill sets. Getting this right would have required a substantial sum, but a fraction of the hundreds of millions spent on importing technical assistance or providing security when the new nation lapsed back into conflict, for reasons mainly of poor governance—not to mention the loss of life and human suffering. It will also take real engagement with the country’s public: through civic participation, through two-way communications, and through the construction of national narratives that allow a national identity to be built.
Fifth, there are strong grounds for hope and confidence, despite real reasons for caution and re-examination. It was possible for Afghan leaders to establish national (by which we mean countrywide rather than focused on the capital city) programs at reasonable cost that took institutional hold and delivered real things to real people. At the same time, many countries—including in the last two decades—have turned the corner from conflict, instability, and endemic corruption and made huge strides forward. There is no cookbook with a recipe for doing this, but there are certainly examples and lessons.
Sixth, the budget matters. Financial management seems technical and boring to many, but it is the glue that holds governance together, and is, in fact, intensely political. In many countries budget day is the political event of the year, as this is where real political trade-offs and fights occur. For an emerging nation, the key marker of success is increasing revenue generation, and allocating the budget to critical needs of the public, rather than relying on donor largesse.
Seventh, assuming (as many in the peacebuilding community have since admitted) that the economy can wait, with politics and security as the only priorities, is a terrible mistake. The imperatives of security and safety, and the role of politics, are essential and undeniable. But in interviews, those who have brokered and implemented peace deals, reflecting on the shortcomings of the agreements that worked and on why others slipped back into conflict, acknowledged that this assumption had been their biggest blind spot. First, young men did not have jobs, and took up arms again. Second, the negotiators promised all sorts of things in the peace agreements that, without the International Monetary Fund (IMF), the World Bank, and the country’s budget behind them, they could not pay for. Finally, if one does not explicitly set about creating the policy and institutional conditions for legitimate market activity, what one gets is not spontaneous legitimate growth, but a deepening of the criminal and war economy, which tips the country back to war.
Lastly, in many countries, there is a reform constituency and a warlord constituency, the latter often deeply intertwined with the war economy and criminal networks. Security built on alliances with the latter may be necessary and expedient, but enduring stability will come from building trust with the public through institutions that serve and protect the public interest. This may seem harder, but there are no shortcuts. The recent fascination with “elite deals” comes at the peril of ignoring a much bigger bargain with the public interest, expressed through a process that commits to a destination of a just society. We collectively ignore the real grievances of large segments of the public, mainly stemming from predatory government behavior. In Afghanistan, a civic leader said that in his country, there are 4 percent thugs, 1 percent extremists, and 95 percent ordinary people. The problem with the foreigners, he said, is that they tend to cut deals between the 4 percent and the 1 percent, over the heads of the 95 percent. These deals last for a while, but the population suffers so much that it is eventually driven to extreme measures to protect life and limb. And so insecurity grows. To reverse the trajectory, no one is asking for Switzerland overnight—just that, year on year, there be a little less predatory government, and a little more fairness and public value.
In this section we describe four critical errors over four distinct phases that led to catastrophic state failure in Iraq between 2003 and 2009. The failure of the state-building effort in Iraq predates the 2003 U.S. invasion of Iraq itself. In a presidential debate with then Vice President Al Gore in October 2000, then Governor George W. Bush stated, “I don’t think our troops ought to be used for what’s called nation-building…. I mean, are we going to have some kind of nation-building corps from America? Absolutely not.”21 His future National Security Advisor and Secretary of State Condoleezza Rice was equally skeptical of state- or nation-building, as indicated in her pre-election warning against permitting American armed forces to become “the world’s ‘911.’”22 In an interview with The New York Times she famously opined, “We don’t need to have the 82nd Airborne escorting kids to kindergarten.”23 These strong statements reflected unequivocal views that were widely shared among then President Bush’s national security team. This skepticism constituted the starting disposition of the Bush Administration entering into office in 2001, and boded poorly for any serious consideration of state- or nation-building as an element of U.S. national security policy or practice.
The al-Qaeda attack on the United States on September 11, 2001, was a traumatic shock for then President Bush and his national security team. As discussed above, the initial impetus in Afghanistan was one of revenge and retaliation against the Taliban regime for its continuing hospitality to al-Qaeda. When, in early 2003, the Bush Administration turned its sights on Iraq, some were surprised; the war in Afghanistan was ongoing and no direct link between al-Qaeda, the Taliban, and the Saddam Hussein regime had been identified.24 Other senior Bush Administration leaders, however, had been advocating an invasion of Iraq even before taking office in 2001.25 Among the most aggressive in advocating regime change in Iraq were then Secretary of Defense Donald Rumsfeld and his then Deputy Paul Wolfowitz, veterans of the President George H.W. Bush Administration (1989-1993), who experienced the frustration of Saddam’s survival after his universally condemned invasion of Kuwait, which precipitated the first Gulf War.26
The assumption among Bush Administration leaders was that the invasion of Iraq and subsequent stand-up of a new government would be rapid and inexpensive. Initial estimates of the anticipated length of the U.S. commitment in post-Saddam Iraq were naively optimistic. Rumsfeld predicted the war would last, “Five days or five weeks or five months, but it certainly isn’t going to last any longer than that.”27 Then Vice President Cheney, an avid advocate of regime change in Iraq, told Meet the Press, “My belief is we will, in fact, be greeted as liberators…. I think it will go relatively quickly…[in] weeks rather than months.”28
Bush’s leading national security officials were confident the costs of installing and consolidating a new regime in Iraq would be underwritten by Iraqi oil revenues. According to Wolfowitz, “There’s a lot of money to pay for this. It doesn’t have to be U.S. taxpayer money. And it starts with the assets of the Iraqi people.... We are dealing with a country that can really finance its own reconstruction and relatively soon.”29 Cheney was again naïve, confidently stating, “every analysis said this war itself would cost about $80 billion, recovery of Baghdad, perhaps of Iraq, about $10 billion per year. We should expect as American citizens that this would cost at least $100 billion for a two-year involvement.”30
It was assumed that following a rapid and decisive military victory over Saddam’s armed forces, the functions of transitional governance would be turned over to prominent, returning Iraqi exiles, like Ahmed Chalabi. Believing that the Iraqi population, grateful for their liberation from the tyrannical Saddam regime, would fall in line under the returning expatriates, “prewar planning for postwar governance following Saddam Hussein’s fall concentrated on the role of key Iraqi exiles and Kurds in the north to assume control of the government’s infrastructure and pave the way for democratic transition.”31
In view of the Bush Administration’s intuitive hostility to state-building and the expectation of a rapid victory and transition to a self-paying Iraqi transitional regime run by returning Iraqi exiles, it should come as no surprise that there was no U.S. plan for postwar Iraq. In a 2006 interview with the Newport News Daily Press, Brigadier General Mark Scheid, an Iraq war planner, confided that Rumsfeld said “he would fire the next person” who talked about the need for a postwar plan. Rumsfeld’s instruction to the planning team was, “everything we write in our plan has to be the idea that we are going to go in, we’re going to take out the regime, and then we’re going to leave…. We won’t stay.”32 Ultimately, there was no cogent “Phase IV” plan for postinvasion operations, including security, stability, and reconstruction. Though civilian agencies had voiced concern and developed concepts for postinvasion governance in Iraq, both independently and under the leadership of the National Security Council, no consolidated or integrated state-building plan emerged.33
With what could euphemistically be called lukewarm support from the international community, the United States invaded Iraq in the spring of 2003 with no coherent or realistic plan for addressing postinvasion governance. This was the first critical error leading to the failure of history’s costliest effort at state-building. The absence of a plan left forces on the ground without a roadmap for organizing the postconflict state, compelling them to improvise blindly based on a limited understanding of Iraq, while deconflicting competing plans and activities implemented by multiple U.S. government agencies and fighting a lethal and growing insurgency.
The initial effort was headed by the ill-starred Office of Reconstruction and Humanitarian Assistance (ORHA), led by retired Lieutenant General Jay Garner. To the extent that ORHA had any concept or plan for governance, it was based on Ministerial Advisory Teams consisting of U.S. officials, returned Iraqi expatriates, and “the last Iraqi standing,” referring to those Iraqi ministerial officials remaining after the purging of top Saddam loyalists. These leadership teams were to sustain the civil functions of the Saddam state apparatus, supported by a vast civil service bureaucracy presumed to remain intact. Often operating at cross-purposes with the U.S. armed forces, obstructed by bureaucratic resistance from the U.S. civilian agencies, and besieged by a deteriorating security environment and ongoing combat operations, ORHA accomplished only marginal planning for the future Iraqi state. To be clear, there was no serious thought or plan for building the Iraqi state after the toppling of Saddam Hussein, for a simple reason: then President George W. Bush and his national security leadership envisioned a swift military demonstration of shock and awe, and an early exit from Iraq. The Iraqi state would simply restore itself.
In May 2003, ORHA was replaced by the Coalition Provisional Authority (CPA), with Ambassador L. Paul Bremer appointed as the Presidential Envoy to Iraq. Paradoxically, while the Bush Administration senior national security leadership essentially abdicated responsibility for postinvasion Iraqi governance, the CPA disbanded the most robust and capable Iraqi national institutions. According to Bremer, “At liberation, there was no political structure on which to build a new Iraqi system.”34 Bremer and the CPA must accept some of the responsibility for this situation, having on May 16, 2003 issued CPA Orders Number 1 and 2, disbanding the Ba’ath Party and all Iraqi military entities respectively.35
The contention that the Iraqi Army and other organizations of state had already “self-demobilized” is disingenuous; as a result of unanticipated circumstances as well as its own actions, the CPA decided to begin a state-building process in Iraq from scratch. The second critical error condemning state-building efforts in Iraq to failure was the decision to dismantle or disregard the existing Iraqi institutional inventory—the “muscle memory” of the country. It might have been understandable to believe that the demented survivors of Saddam’s “republic of fear,” having endured decades of pathological totalitarianism, would be incapable of carrying out the functions of the new Iraqi state.36 However, unrooted in the organic plasma of Iraqi life, the returned exiles were not able to reanimate governance at any level. One inadvertent consequence of the de-Ba’athification policy was the dismissal of an estimated 15,000 to 30,000 civil servants, creating a shortage of qualified workers to run the government. As the occupying authority, the United States and its creature, the CPA, left themselves few alternatives to taking the responsibilities of governance upon themselves.
In the context of a continually deteriorating security environment, Bremer plowed forward, outlining a seven-step blueprint for restoring full Iraqi sovereignty, in which he argued that “the process is straightforward and realistic,” and that “knowing how to turn Iraq into a sovereign state” is not a problem.37 This blueprint was developed with minimal Iraqi participation. Having arrived in Baghdad with no plan at all for postwar governance, the United States then attempted to impose a hastily conceived plan onto a sullen Iraqi population. The exclusion of the Iraqi governing elite and reliance on expatriates, coupled with the complete opacity of the transitional process to ordinary Iraqis, led predictably to popular alienation from the CPA and its transitional project. In other words, planning, design, and leadership for the new Iraqi state were provided by non-Iraqis. The literatures of development, management, and other disciplines are unanimous in their acknowledgement of “buy-in” and “ownership” as central to any kind of transition or transformation effort.38 A process based on substitution and replacement as opposed to buy-in and ownership was bound to fail, and was the third critical error in the state-building process in Iraq.
The Bush Administration’s misplaced hope in the leadership capacity and legitimacy of Iraqi expatriates was ultimately replaced by faith that a group of Iraqi leaders selected through an American designed and managed process would assume the burdens of governance. But how would the right individuals be identified? Bremer argued that “Elections are the obvious solution to restoring sovereignty to the Iraqi people. But at the present elections are simply not possible. There are no election rolls, no election law, no political parties law and no electoral districts.”39 Bremer was under pressure from Washington, however, to accomplish a transition to full Iraqi sovereignty before the November 2004 U.S. presidential election, as the war was already beginning to take a heavy toll on the American public, and thus, on the White House. And Grand Ayatollah Ali Sistani, the Iraqi Shia spiritual leader, insisted that popular elections should be held at all levels of government, from the national to the municipal.40
By summer of 2003, the state-building effort in Iraq had become consumed by the search for legitimate Iraqi leadership to assume the responsibilities of future governance. Neither appointed proconsuls nor U.S.-selected Iraqis could meet the legitimacy test among a restive and divided population. A feverish effort ensued to find the right balance between Sunni and Shia, Kurd and Arab, and expatriates and those who remained under the Saddam regime. The ill-fated 25-member Iraqi Governing Council (IGC), announced by the CPA on July 13, 2003, suffered from a lack of credibility from the beginning, due in no small part to the fact that its members were appointed by the CPA. By June 2004, the IGC was replaced by the Iraqi Interim Government (IIG), another CPA-appointed body, responsible for receiving the transfer of governing authority from the CPA. Thus ended the first of four phases of Coalition engagement in Iraq.
Phase two was characterized by a panicked, massive response that included the establishment of the Iraq Reconstruction Management Office (IRMO), the Project and Contracting Office (PCO), Provincial Reconstruction Teams (PRT), and a proliferation of projects under the Commander’s Emergency Response Program (CERP). These programs and offices injected over $20 billion into a shattered country in a short period, guided still by neither strategy nor master plan. The IRMO and PCO were the reconstruction management units of the State and Defense Departments respectively, established to assume the responsibilities formerly managed by the CPA. PRTs constituted an innovative U.S. military initiative to combine both “hard” and “soft” power in extending the writ of government by empowering local governance. Begun originally in Afghanistan in 2003, the PRT concept was extended to Iraq, where 10 PRTs were established in November 2005.41 The CERP funds were used at the discretion of military commanders as “walking around money,” for the purpose of winning support for reconstruction and stabilization from the local Iraqi population. Feverish “catch-up” project saturation overwhelmed Iraqi capacity.
Beginning in 2004, a debate raged between those insisting on immediate election of Iraqi officials, and those who argued that the requisite security conditions and resources (e.g., a census, voter rolls) were not yet in place for free and fair elections. Though marred by low Sunni turnout, a national parliamentary election was finally held in January 2005. In the meantime, however, the security environment throughout the country was deteriorating dramatically. “Beginning with the rampant looting and violence throughout the country following the fall of Baghdad on April 9 (2003), coalition forces lost time and the trust of the population by failing to control the security environment.”42 The initial neglect was due to the coalition military forces assuming that policing and maintaining order were not their responsibilities. “At no point do we really see becoming a police force,” Brigadier General Vincent Brooks stated at a briefing in Qatar on April 11, 2003.43 Policing and stabilization operations have always been orphans in the U.S. foreign affairs apparatus, and absent a plan for postinvasion Iraq, little thought was given to what might happen once the regime was overthrown. In response to questions about the collapse of the rule of law, Rumsfeld famously responded, “Stuff happens.”44 The stuff that happened in Iraq is that 2.5 percent of the Iraqi population died as a result of the conflict between March 2003 and June 2006—nearly 655,000 deaths.45
Although states historically have been formed in the crucible of conflict, this level of lethality is obviously not conducive to the many tasks necessary to form and consolidate the institutions of an effective and legitimate state. The fourth critical error condemning the postinvasion state-building effort in Iraq to failure was the failure to understand the critical importance of establishing a much higher degree of security from the outset. Accomplishing that may have required U.S. military units to take on police or paramilitary functions (such as those of the gendarmerie or carabinieri in Europe), but dismissing the growing insecurity as an inevitable collateral cost and a manifestation of the stabilization process was, in retrospect, woefully misguided. The premature focus on establishing mechanisms for the play of competitive politics at the expense of establishing the rule of law handicapped the state-building effort from the beginning. Phase two eventually ended with a premature troop drawdown that derailed whatever progress had been achieved in the reconstruction effort.
By 2007, the security situation had deteriorated to such a degree that, against the advice of many of his confidants and widespread popular opposition, then President Bush decided to change course in Iraq, and double down with a new strategy. The third phase of the U.S. engagement in post-Saddam Iraq was built on a new strategy embodied in a troop surge, a leadership change, and a population-centric approach to counterinsurgency. “The biggest of the big ideas that guided the strategy during the surge was explicit recognition that the most important terrain in the campaign in Iraq was the human terrain—the people….”46 The so-called “surge” sent more than 20,000 additional U.S. troops, primarily to Baghdad and Anbar Province.47 Military command of the Iraq War was given to General David Petraeus, while the political and diplomatic lead was given to seasoned diplomat Ryan Crocker. Despite initial skepticism, in retrospect, phase three—with the new leadership, troop surge, and counterinsurgency strategy—was successful in reducing violence in Iraq and in making progress in building the Iraqi state possible.48
Phase three, from 2007 to 2009, offered a brief window during which the alignment of politics and reconstruction advanced the project of building the Iraqi state. Belatedly, a significant effort to help rebuild the atrophied Iraqi judicial system included “construction of judicial facilities, training of judicial security elements, and support for reestablishment of judicial systems and structures.”49 New laws on elections, provincial powers, and de-Ba’athification were passed. A measure of hope was restored, only to be snuffed out all too abruptly by the November 2008 passage by the Iraqi Parliament of the Status of Forces Agreement (SOFA) requiring all U.S. forces to depart Iraq by December 31, 2011.50
Phase four, initiated by the SOFA, consisted of the consequent troop drawdown under newly inaugurated President Barack Obama, the dismantling of the PRTs, and the end of the CERP. This was consistent with the new president’s campaign commitment to “ending this war.”51 The agreement, negotiated during the second George W. Bush Administration and implemented by President Obama, aborted the progress made during phase three and left the reconstruction and state-building project undone.
Ultimately, in view of the disregard for the centrality of state-building to the mitigation of conflict and the stabilization of states and regions that the campaigns in Afghanistan and Iraq demonstrated, it is unjustifiable to conclude that these represent failures of state-building. State-building was, in both cases, an inconvenient afterthought, never seriously embraced by the leading powers in either theater. A profoundly misguided abdication of responsibility for planning state-building efforts—rooted in an ideological predisposition against state-building within the George W. Bush Administration—and an ongoing series of execution and judgment errors are to blame for these epic failures, not the concept of state-building.
According to Charles Tilly, “War wove the European network of national states, and preparation for war created the internal structures of states within it.” In other words, states emerge from the crucible of conflict, which hardens and aligns their geographical borders where they can be defended effectively from external adversaries, while exercising sovereignty over their subjects and interior territory.52 Maintaining the security forces necessary to defend large territories (and to maintain internal law and order) is a substantial undertaking requiring commensurate appropriation of national financial and human resources. Large governmental organizations are established to provide for national defense and to maintain law and order—public goods which citizens are historically willing to underwrite. States are built around these institutions and processes.
The 20th century shows us there are many paths to statehood, though conflict and war—either external or internal—remain powerful shaping forces. Consider the cases of Jordan, Singapore, and Colombia—each unique in its path to consolidated statehood, and none without some degree of conflict. Each is worthy of brief examination for indications of what institutions, processes, and attributes external agents (e.g., the United States or the international community) may wish to foster.
Liberated from the Turks following their defeat in World War I, Arab leaders struggled to establish the integrated Arab homeland to which they aspired, but were ultimately frustrated by the strategic requirements of the victorious European powers. France and Britain competed for territory and control in the Levant, dividing it into mandates defined at the 1920 Conference at San Remo, Italy, based on the notorious Sykes-Picot Agreement.53 Palestine and Iraq were accorded to Britain, and Syria and Lebanon to France. With no history as an autonomous polity, Jordan rose from the ashes of the Ottoman Empire as part of the Palestine Mandate. Governed for centuries by the Ottomans within the Damascus vilayet, the land that became Jordan was dismissed as “the vacant lot which the British christened the Amirate of Transjordan.”54 With no capital city nor population center, mandatory governance was managed by the British High Commissioner in Palestine, with Abdullah bin Al-Hussein, a descendant of the Prophet and key leader of the Arab Revolt, designated as the emir under British supervision. Britain provided an annual subsidy of £2 million.
Basic governance functions in Transjordan were assumed by British administrators posted to the mandate, who established an army and police force, organized finances and border demarcation, and established civilian governmental ministries.55 The British recognized Transjordan as an independent government in 1923, but maintained control over financial, military, and foreign policy matters. In 1925, the mandatory territory of Transjordan was determined by the Permanent Court of International Justice and the International Court of Arbitration to be one of the newly created successor states to the Ottoman Empire. The state was further consolidated in 1928, by the enactment of an “Organic Law” and a “Nationality Law,” “authorizing the new state in its territorial and temporal claims and in its control of the bodies over which it rules.”56 It was only in 1946 that the Hashemite Kingdom of Jordan was internationally recognized as a fully sovereign state, and only in 1955 as a full member of the UN.
Britain’s first priority in the mandatory territory was to establish stability and order among the Bedouin tribes. At its inception, Jordan (then Transjordan) became a de facto British protectorate, with Britain guaranteeing both internal and external security. In 1920, the 100-strong “Mobile Force” was formed under the command of Captain F.G. Peake. By 1923, the need for a larger force led to the formation of the “Arab Legion,” financed by Britain and under the command of British officers. Even as Britain incrementally granted sovereignty to Jordan, it retained this role in the Jordanian military until Sir John Glubb was relieved of command in 1956. The Arab Legion served as the Jordanian army, imposing both internal order in the realm, and defending it against incursions by Wahhabi tribesmen from the Najd (in what is today Saudi Arabia).
While the British provided security, forging a new national identity—in a land that never previously possessed one—fell to the Hashemites. These direct descendants of the Prophet, “placed Jordan into the continuum of Arab history, and then as a logical extension of it…the personal bridge connecting” the traditional narrative of Arab unity and the emerging reality of divided Arab states.57 The establishment of borders, central administration, and nationality defined the new state. Membership in the Arab Legion was a critical unifying factor bringing together members of the many tribes that made up the Bedouin population. Emir Abdullah, not being native to the land of Transjordan, was able to convene and lead as a neutral arbiter over rival and often feuding chiefs. With no residual attachment to Ottoman identity nor competing political loyalty to local, familial, or tribal affiliations, and juxtaposed against colonial suzerainty, Abdullah was able to create a narrative of Jordanian citizenship, based on his own lineage, the army, and his deft balancing of local politics against external threats. As the nascent state evolved key government institutions “promoted the creation of a supra-tribal national culture that furthered Transjordanians’ national identifications and instilled in them a sense of patriotism.”58
There are important insights to draw from the consolidation of the state that is today Jordan, which are worthy of consideration even if not directly applicable or replicable in the 21st century. First, Jordanian sovereignty was achieved in stages, under the watchful eye of British senior management. The process of gaining full sovereignty took about 25 years—approximately one generation. Even today Jordan remains dependent on external financial support from the donor community, and security support from its allies, including the United States and Great Britain.59 The principle of gradual or incremental sovereignty may no longer be politically acceptable, though Timor-Leste and Kosovo incubated under UN and North Atlantic Treaty Organization (NATO) protection for years. The process of state formation, however, must be allowed to unfold over whatever period of time is necessary to realize essential sovereign state functions (effective governance of territory and policing of borders). Premature sovereignty renders unready states vulnerable to the fissile dynamics of internal politics, the predations of external adversaries, and the corrosion of illicit networks and corrupt insiders. Permitting sovereignty to progress incrementally provides opportunities for local learning, institutional development, management and mitigation of internal conflicts, and gradual socialization of policies. The 25-year process of Jordanian sovereignty is not necessarily a guideline, but the generational nature of state consolidation must be acknowledged along with the bilateral or multilateral partnerships necessary to cultivate such a process. Any international actor, be it the United States, the UN, or an ad hoc coalition of the willing that wants to help build a state, must be in it for the long haul, with modest expectations for short-term benchmarks and no delusions about succeeding on the cheap.
Avoidance of conflict for at least five years following the cessation of hostilities is associated with dramatically greater likelihood of long-term security. It follows that taking the steps necessary to avoid conflict during the initial five-year window must be a high priority. The British, through the Mobile Force and then the Arab Legion, established and enforced security in Transjordan, suppressing internal fighting and external aggression. Like Jordan, even today in the midst of regional conflagration, both Kosovo and Timor-Leste are relatively peaceful, both having consolidated under the protective security shield of international partners. These cases, in addition to the cases of Afghanistan and Iraq, where international powers failed to achieve security and stabilization in the immediate postconflict period, suggest that security and stability must be a paramount priority. Without security and stability, no progress or consolidation is possible.
The story of Singapore’s journey from a corruption-plagued, politically unstable, resource-poor city-state populated largely by indentured Chinese and Indian laborers, to a top global economic power and standard-bearer for good governance less than four decades later provides several critical lessons in successful state-building. The Southeast Asian island nation’s remarkable transformation has been well-documented, not least by the man responsible for leading it, in his own words, “from third world to first”: Singapore’s founding father and first prime minister, Lee Kuan Yew.60
Singapore’s achievement is even more impressive when one considers that it was unceremoniously asked to leave the Federation of Malaysia in 1965, just two years after being admitted; its prospects as an independent state were considered so poor that it had accepted much humbler terms of membership.61 Even Lee admitted, “We faced tremendous odds with an improbable chance of survival.”62
Today, Singapore has the 10th highest gross domestic product (GDP) per capita in the world and stands as such an exemplary case of state turnaround and market building that countries around the world look to it regularly to provide examples and guidance for their own transformations.63 Its success is widely acknowledged to be the result of effective policy planning, implementation, evaluation, and adaptation. Early on, its leaders made a series of bold decisions about what kind of country they wanted Singapore to become, and then systematically implemented policies designed to turn that vision into reality.
In 1965, unemployment in the newly independent country was at 14 percent and rising and its only major economic asset was its location alongside a high-volume shipping lane, which enabled it to act as a port between major international markets.64 Understanding that ample employment opportunities were necessary for political and social stability, the government embraced an economic policy that prioritized substantially increasing sustainable job opportunities.65
This goal was part and parcel of Lee’s central plan to bring Singapore to self-sufficiency, which led him to reject the familiar model of long-term foreign aid in favor of short-term aid to be used in service of the development of new industries. His philosophy is captured in his declaration, “The world does not owe us a living. We cannot live by the begging bowl.”66
Within that self-sufficiency strategy were two unique decisions that guided Singapore’s development. First, the government chose to vault over the prescribed stages of national economic planning theories of the time and instead look for opportunities to connect to the international economy via investment from multinational corporations (MNCs) that could bring skills-training and jobs to its workers.67
This decision was supported by a second transformational decision: rather than become another third-world factory in the region for Western goods, Singapore would make itself into a first-world oasis of service standards.
Before Singapore could attract MNCs, it needed an unimpeachable system of governance. The first plank of the economic policy’s implementation strategy was a series of efforts to systematically curb corruption and institute integrity at the uppermost levels of what was then a very dishonest system. A former cabinet minister described how Lee, in his early days in office, would walk into offices and run his finger along bookshelves to check for dust, signaling his attention to detail. A parking attendant was fired for taking a bribe worth less than a dollar.68 These new standards were gradually expanded throughout government over the following decades, and by 2013, Singapore was ranked fifth out of 179 countries in Transparency International’s Corruption Perception Index.69
Once it had established a system for tackling corruption, the government implemented a highly successful combination of policies to attract foreign investment. These included: building transport infrastructure; creating industrial estates; offering equity participation in national industries; granting fiscal incentives (such as a 10-year tax-free status for investors) to promote exports or stabilize labor relations; implementing sound macroeconomic policies; engaging leading international businessmen; cultivating an investment-compatible image through grooming public spaces; raising professional standards in the services industry; and generating positive publicity by sending officials to attend foundation-laying ceremonies and official openings of factories.70 These combined efforts contributed towards a favorable investment climate with low transaction costs, low barriers to entry, and low risk of government or labor disruption to business operations.71
Singapore’s first leaders were as bold in their approach to governance and public sector management as they were in their economic vision. On the eve of independence, they began an experiment in self-governance, with only the country’s limited experience under the British colonial system and the Malaysian Federation as preparation. The People’s Action Party (PAP) was Singapore’s ruling party within the Malaysian Federation, and it won a vast majority of parliamentary seats in the first independent government. The PAP’s top leadership was composed of a cadre of highly educated and technically capable professionals, with Lee at the helm. His team shared a strong work ethic, a common vision, and a mutual trust built through the shared experiences of World War II and the Japanese occupation, the end of colonialism, and the integration with and then secession from the Malaysian Federation.72
That competent original team actively cultivated future leaders from the earliest days of self-governance. The government institutionalized several processes, including: efficient administrative procedures and processes for long-term planning; coordination between public, private, and civil actors; solicitation of input from stakeholders; implementation; monitoring; feedback flows; evaluation and revision of policy; creative adaptation of positive policy examples around the world; learning; and innovation.73
The administration avoided classic bureaucratic pitfalls by establishing clear purpose and principles that endure today: integrity, meritocracy, pragmatic orientation toward results, efficiency tempered by social equality goals, and socially inclusive stability.74
In 2004, then Prime Minister Lee Hsien Loong articulated four principles of governance: leadership with vision, moral courage, and integrity; constant re-examination of old ideas and openness to new ones; self-reliance and individual responsibility, tempered with the provision of some social safety nets; and an inclusive society where citizens feel a sense of ownership and belonging.75 In addition, Singapore has designated five National Values: nation before community and society above self, family as the basic unit of society, community support and respect for the individual, consensus above conflict, and racial and religious harmony.76
The established purposes of Singapore’s government are: development of human capital as the country’s main resource, encouragement of self-reliance and financial prudence, maintenance of stability to attract foreign direct investment (FDI) and international talent, fostering sustainable economic growth, achieving and maintaining global relevance, prioritization of long-term sustainability over short-term gain, and supporting proactive government intervention to improve the public welfare.77
The Singaporean public sector ranks well internationally and competes successfully against members of the private sector for the Singapore Quality Awards, which recognize outstanding achievement in organizational management.78 Singaporeans enjoy high educational attainment and living standards in a country free of external debt, where government expenditure generally ranges between 14 and 18 percent of GDP.79
A strong testimony to Singapore’s success in governance and public administration is its management of the nation’s many transitions, including economic transitions, as described above, as well as transitions associated with social, political, international, and technological changes. For example, in 1989, Singapore launched TradeNet, the first nationwide electronic data interchange system in the world. Rather than simply digitizing existing processes, the Singaporean Trade Development Board used TradeNet to transform its organizational structure and business processes, network, and scope, resulting in productivity and competitiveness gains in both the public and private sectors.80
In 1995, the Singaporean government introduced the “Public Service for the 21st Century” initiative, in order to mold the public service into a body capable of undertaking “change as a permanent state,” by optimizing each employee’s potential, improving bureaucratic processes, building from coordinated action to a coordinated vision, empowering the ministries through decentralization of decisionmaking powers, and emphasizing superior leadership to counteract the public sector’s lack of market competition. This program aimed to change organizational culture and processes by targeting employee well-being, continuous learning, high-quality customer service, and organizational reviews for the purpose of integrating new technologies, reducing inefficiencies, and enhancing innovation.81
The government’s continuing investment in long-term planning and visioning is epitomized by the Centre for Strategic Futures (CSF), established in 2009, as part of the Strategic Policy Office within the Prime Minister’s Office. The CSF develops tools and methodologies to promote strategic thinking and risk management throughout government, and develops collaborative networks between government agencies, international partners, and academic organizations.82 The CSF hosted the 2013 Conference on Foresight and Public Policy, which identified four key issues in Singaporean strategic planning: the future of growth; the middle class; cities; and relations between citizens, corporations, and government.83
Singapore’s experience has its critics and its limitations. Many point to its less-than-democratic practices, and in recent years Singapore itself has been reexamining its model of citizen participation and enfranchisement. Some point to the issue of scale, arguing that because Singapore is a city-state rather than a large country, its applicability may be limited. Still, countries across Asia and Africa continue to turn to Singapore to share its lessons and expertise for country and city transformations.
Unlike Jordan or Singapore, which had minimal experience of self-government prior to the 20th century, Colombia has been a sovereign republic since the early 19th century. Political violence, however, has been constant throughout Colombia’s history. The period 1948 through 1958, known historically as la violencia, was particularly wrenching, characterized by political assassinations, riots, uprisings, and extreme cruelty. As many as 200,000 Colombians lost their lives. The entente in the 1960s between the feuding Liberal and Conservative parties—representing different interests within the elite classes—that ended la violencia constituted a short-lived experiment in coalition government and conflict mitigation, but failed to protect the interests of the rural poor. The formation of rural-based and Marxist-influenced guerrilla movements, notably the National Liberation Army (ELN) in 1964, and the Revolutionary Armed Forces of Colombia (FARC) in 1966, marked the inauguration of another protracted period of political violence in Colombia.
The ELN and FARC insurgencies were primarily fought in Colombia’s rural areas; however, during the 1980s, the war came to the cities. A new guerrilla organization, “M-19,” rose from the electoral controversy of 1970 and committed spectacular acts of terrorism, including the 1985 siege of the Palace of Justice, which left over 100 dead, among them 11 of the 21 justices of Colombia’s Supreme Court. The growing cocaine trade added a further dimension of lawlessness, as the Medellín and Cali cartels both fought against, and corrupted from within, the Colombian state, armed forces, and police. By the late 1990s, many considered Colombia to be on the brink of state failure.84
In June 2016, the final details of a peace agreement intended to end 50 years of violent conflict in Colombia were being negotiated.85 Any agreed peace could still unravel; significant challenges remain. However, the country “has been converging fast towards higher living standards since the early 2000’s. Sound macroeconomic policy reforms—the adoption of an inflation targeting regime, a flexible exchange rate, a structural fiscal rule and solid financial regulation—have underpinned growth and reduced macroeconomic volatility.”86 From nationwide lawlessness in the late 1990s, when insurgent groups controlled as much as 40 to 50 percent of Colombian territory, the state has recaptured control, with “only six percent of municipalities…affected by terrorism by the end of 2014.”87 How did this dramatic reversal of national fortunes take place, and how was Colombia rescued from failure?
Some have argued that by the late 1990s, the Colombian people were simply fed up with conflict and insecurity, and demanded stabilization and peace. One seasoned observer wrote, “By 1999, Colombians had reached a collective conclusion that, if the deteriorating conditions remained unchecked, the viability of the nation was in question.”88 Perhaps it is true, but history indicates that popular will, though necessary, is not sufficient to either catalyze change or guarantee stability and peace. The formula for dramatic change in Colombia rested on four variables: determined political leadership, recapturing the military initiative, restoring law and order, and investing in social programs in marginalized areas both rural and urban.
Following the failed efforts of then President Andres Pastrana to negotiate an enduring peace with FARC and ELN (1998 to 2002), attitudes among the Colombian political leadership hardened. With the support of the United States, Pastrana initiated Plan Colombia, an ambitious, long-term, and multifaceted plan for the recovery of sovereignty throughout Colombia: “At the core of the state-building, of course, was the modernization/professionalization of the armed forces and police to gradually construct peace.”89 Though the Plan had significant economic and other components, Pastrana’s successor, then President Alvaro Uribe, took office in 2002 “…with a platform based on establishing security,” stating during his campaign, “If I am elected I will fight day and night, every minute during 24 hours a day, to restore security….”90 Recognizing the complexity of Colombia’s generations-old internal conflict and the socioeconomic root causes of the conflict, Uribe argued, “Of course we need to eliminate social injustice in Colombia but what is first? Peace. Without peace, there is no investment. Without investment, there are no fiscal resources for the government to invest in the welfare of the people.”91 Uribe ushered in a period of full commitment to defeating the insurgencies as well as the narco-traffickers, and held to that commitment for eight years. He imposed a national wealth tax to raise funds to complement foreign assistance.
From 2000 to 2010, the combined Colombian armed services’ active and reserve components grew from 213,700 to 347,120, an increase of 62 percent. The defense budget trebled from $2 billion to $6.18 billion.92 Over the 15 years, from 2000 to 2015, with substantial support from the United States and other allies, through high-value targeting and the deployment of joint task forces, among other initiatives and innovative tactics, the armed forces were able to reclaim lost territory, and degrade the FARC from over 20,000 to fewer than 8,000 combatants.
By circumscribing military operations and shrinking the space outside of the government’s monopoly on the legitimate use of force, the Colombian authorities were able to focus on restoring law and order throughout the country. By 2007, all of Colombia’s 1,099 municipalities had a formal state presence, up from 930 in 2002. The effort has been classic “clear, hold, build.” According to Robert Killebrew:
First, the military pushes the FARC out of a geographical space. Close behind the troops comes the National Police, who have evolved into a quasi-paramilitary force acting under the rule of law to secure the gains the military has just made, and courts to hear complaints—and the cops and the legal system stay permanently. Third, and with the cops and judges, comes economic assistance in the form of food grants, the making of truck farms, larger grants in in-kind assistance for economic development, electricity, email connectivity, roads, schools and all the trappings of good government.93
In 2002, when Uribe took office, the homicide rate was 69 per 100,000 annually, with over 28,000 homicides.94 By 2013, the rate had dropped to 32 per 100,000.95 In the same year, 132 Colombian fugitives were extradited to the United States. Kidnappings dropped from nearly 3,000 in 2000 to 288 in 2014. Increasing stability led to dramatic increases in economic growth with GDP increasing from approximately $100 billion in 2000 to $337.7 billion in 2014.96
Also critical to revitalizing the Colombian state was the emergence of a new narrative which cast the FARC as narco-traffickers, terrorists, and enemies of the state. This new narrative was reflected in Uribe’s “Democratic Security Policy,” emphasizing citizen rights, security, and the rule of law, which effectively discredited FARC claims to be the defender of the people. Uribe’s plan was partly underwritten by U.S. assistance, but also by over $1 billion raised through a “war tax” imposed on the wealthiest Colombians—that it was paid is an indicator of the success of the new narrative.97 FARC’s widely perceived failure to negotiate in good faith and use of the demilitarized zones granted to them by Pastrana as safe havens for regrouping and rearming further eroded their credibility. “The failed negotiations severely disillusioned the Colombian public and generated widespread support for adopting a hardline approach to security.…”98 The new narrative was helped by the dramatic terrorist attacks on the United States on September 11, 2001, which drew global attention to the brutality and inhumanity of terrorism. In 2004, the FARC was publicly condemned as a terrorist group by the UN High Commissioner for Human Rights.
With the FARC’s military project at a dead end and the political space both within Colombia and globally contracting, in 2012, the FARC returned to the negotiating table a severely weakened force. Though the long-term success of Colombia’s recovery over the last 15 years cannot be assumed, the country has made remarkable progress in rebuilding vital state security institutions, reinvigorating its economy, and creating a new social contract that has engaged the vast majority of Colombians. The Colombia-U.S. partnership has been a critical element in this historical process, and demonstrates that external support for state-building efforts, provided the political will and determined leadership are present on both sides, is not futile.
There is much we do not know about state-building. We do know from considerable experience that state-building is an arduous, labor-intensive, and time-consuming task. There is extensive literature on the subject and widely diverging views on how it should be done, but virtual unanimity regarding the intensiveness of the process.99 What does state-building consist of? Though far from a science—still more alchemy than chemistry at this stage—there are a few principles that draw wide agreement.
In the litany of what must be done, arguably the most vital are the formation and consolidation of the security sector and the establishment of the rule of law.100 The performance of the security sector, comprising those institutions and forces that guard sovereignty and ensure order, determines what civil, social, and economic activities may take place. The military and the police forces, along with their governing—preferably civilian—bodies, shape the environment for civic activity. The state must establish a secure and stable environment for both public and private life. Max Weber famously prescribed this as the singular defining attribute of a state: “a state is a human community that (successfully) claims the monopoly of the legitimate use of physical force within a given territory.”101 Importantly, Weber specifies the legitimacy of the use of force. No state has or can enjoy a complete monopoly of the use of force—nor would we necessarily want it to do so. However, for the use of force to be legitimate, it must be sanctioned by the state. Historically, the state use of force has been conceived as a responsibility to protect citizens from external aggressors, though in many cases the state itself has been an aggressor. This unpleasant reality has recently been addressed by a growing acknowledgment that the state’s responsibility for security extends beyond its own survival to its population; hence, the emerging concepts of “human security” and “citizen security.”102
Establishing a monopoly of the legitimate use of force in a territory is no mean feat, and cannot be accomplished by brutal methods without sacrificing the legitimacy that is essential to effective governance. There are numerous U.S. government programs that provide assistance, training, equipping, mentoring, and other support to partner governments, both military and civilian agencies, for the purpose of building partner capacity (BPC). No amount of training and equipping, however, can substitute for the social contract between government and governed necessary to establish and sustain legitimacy. This must be achieved by our partners; and in this, our role can only be to help them identify methods, techniques, best practices (to the extent we know them), and lessons to enable their success. Controlling the use of force within sovereign territory, either directly or through delegation, is an essential function of a viable state.
The application of force in society must be bound by the rule of law, the establishment of which is another critical responsibility of the state. The state must establish the rule of law and mechanisms for articulation, adjudication, and redress of grievances. Doing so provides methods for the resolution of social and other disputes within society, provides predictability necessary for commerce, and ensures the security of citizens. The rule of law is not just a question of constitutions or statutes, though they form the legal framework in any country. It also requires that citizens have access to the law and the institutions of justice, and that they are not excluded from legal recourse by cost, language, distance, or identity. Under a genuine rule of law, the state itself is also subject to the law and cannot operate outside the law or the legal system. This includes prohibiting and punishing corruption, especially within government agencies both civilian and military.
In order to execute its required functions effectively and sustainably, the state must invest in and upgrade human capital; it must create a reliable and competent civil service to administer official state functions and manage state assets. The best-known method of succeeding in this is the application of merit-based recruitment, retention, and promotion. It must also provide a social and security environment conducive to education and public health. A professional civil service, including administrators, diplomats, and a range of support personnel necessary to operate complex systems is required to assume the full spectrum of governmental responsibilities.
The state must develop systems, human resources, and institutional mechanisms for raising revenue, securing state financing, and managing state assets and budgets. No state can operate effectively without a stable and predictable revenue stream sufficient to meet the costs of its obligations. There is controversy over what a state’s obligations are and vast variation among states; but whatever obligations compose the social contract between government and governed must be within the financial means of the state. Taxation and regulation of commercial and financial activity is a responsibility only appropriate for the state. A banking system capable of interaction in the global financial network is beyond the capabilities of the private sector alone.
The degree to which the state must or should be involved in commercial activity is debatable, and indeed, the subject of wide historical debate that has generated large and powerful intellectual and political schools of thought on the subject. There is, though, a degree of consensus on the state’s responsibility with respect to creating an environment in which citizens’ economic needs are met.103 The debate is over the balance of responsibility for meeting those needs within a conducive environment. Some argue the state should enable individual economic and commercial innovation, bearing only a modest responsibility beyond that through taxation, promotion of property rights, and limited redistributive programs. Historically, however, in many contexts of state formation and growth, the state has taken an active role in creating the institutions that underpin the market and establishing a pathway to industrial and agricultural development and technological innovation. The state has also intervened in times of acute market failure, including after the Asian financial crisis of 1998, and again after the global fiscal and financial crisis of 2008 to 2009.
Critical to state sustainability is an inclusive national narrative promoting citizenship. While not the exclusive responsibility of the state, typically only the state has access to the nationwide communication systems required to disseminate strategic messaging about national issues, though this is decreasingly true in the era of ubiquitous social media. Still, identity building and creating a national narrative remain governmental responsibilities; states must learn to utilize all emerging media to accomplish this. The drafting and adoption of a national constitution can contribute to an inclusive national narrative, as can elections. These can, of course, be divisive, but that is the craft of statesmanship: being able to manage and utilize such formal processes in support of the national interest. The willingness of citizens to pay taxes to the state is contingent upon an inclusive national narrative and a social contract that citizens accept. An effective state penetrates most aspects of public life, including education, public health, commerce, and dispute resolution, among many others. It can use those platforms to further the forging of an inclusive national narrative. This requires high standards of leadership, without which no state will succeed in any case.
The state must also create accessible mechanisms for interaction between citizens, civil society, and the state. Robust civil society encourages associative behavior conducive to social capital and enables citizens to pursue their interests equitably amidst the competing interests within any state. The state cannot form or create civil society, but it can communicate and interact transparently and responsively with civil society organizations. Furthermore, it can provide and secure the political space needed for their operations. Finding ways to consult and engage citizens, particularly in active and decisionmaking roles, can help secure their participation in and loyalty to the state. Many governmental functions and social services do not need to be carried out by government entities, but if the government can set the policy frameworks and regulations, citizens, nongovernmental actors, and the private sector can be involved in different ways in implementing policies.
The most critical endowment of any state is its human capital. The state is a bureaucratic abstraction composed of people, and can be no better than the people who populate it. This is no less true of government institutions and agencies than of any other organization. An educated and healthy population base is the critical ingredient at the heart of a successful state, as it forms the reservoir from which good leadership is drawn. How does a state invest in human capital? Obviously, through budgetary allocations to education and public health programs and institutions, but less obviously by creating a culture of merit-based incentives and disincentives. “Singapore provides better schools and hospitals and safer streets than most Western countries—and all with a state that consumes only 19 percent of GDP.”104 One of the many successful innovations in Singapore has been its well-compensated, meritocratic public service professions, including, significantly, the Singapore Civil Service. “In Singapore meritocracy reigns all the way down the system.”105
These approaches reflect experience from a diverse array of geographic, cultural, and economic environments, but must not be considered as a sequential or structural template for building effective states. While some generalizations may be widely transferable (e.g., the primacy of security), context-specificity is paramount and several additional factors must be recognized and respected in any effort to support effective state-building. Perhaps most important is host country ownership. As stated in the Paris Declaration on Aid Effectiveness, “Partner countries [must] exercise effective leadership over their development policies, and strategies and co-ordinate development actions.”106 This is to say simply that the recipient or host country (i.e., the assistance receiver) must not only fully embrace the proposed development effort, but be full party to its design and implementation. The host country must “take the lead in coordinating aid at all levels in conjunction with other development resources in dialogue with donors and encouraging the participation of civil society and the private sector.”107 Only then is it likely the country in question will develop the self-organizing dynamics required for the long-term sustainability of institutional development.108 They must “want it” at least as much as we want it.
There is no substitute for good leadership. Our brief examinations of Jordan, Singapore, and Colombia provide evidence that enlightened leaders can, with external support when necessary, overcome the many challenges to building an effective state. “Political leadership is crucial because of the way it affects the quality and autonomy of the bureaucracy.”109 In Jordan, King Abdullah and, later, King Hussein carefully and skillfully navigated the violent tides of conflict in the Middle East, maintaining critical alliances with powerful external allies, and balancing the interests of competing domestic interest groups. Singapore’s Lee Kuan Yew established and enforced a set of standards for ethical behavior in governance that has made Singapore one of the most successful states in the world today. In Colombia, then President Alvaro Uribe and President Juan Manuel Santos, though taking diametrically opposed political stances toward the current peace process, both acted strongly in support of a vision of ending a generations-long war in Colombia.
Good leadership is anchored by an unwavering will to accomplish the necessary objectives and goals of good governance. In the words of Verena Fritz and Alina Rocha Menocal, what is needed is “a political leadership…committed to development and, in most cases, the uprooting of traditional elites.” In the most successful cases, such as the Asian Tigers, “Development was regarded as a ‘national project’ of the first priority.”110 Such will—sometimes referred to as political will—goes far beyond rhetoric, and often requires great compromise, and even self-sacrifice. It often means taking on deeply entrenched and self-defending interest groups. In the absence of good leaders, no effort or commitment on the part of the international community is likely to succeed in building an effective state.
Ultimately, the factor perhaps most underestimated, but of critical importance, is time—the amount of time necessary for any development project, let alone a state-building project. It may be advantageous to think of state-building as a process rather than as a project in order to avoid the anticipation of completion that a project suggests. We look forward to the completion of a project; a process may endure indefinitely. We believe state-building is an indefinite process, with no endpoint. The quest for answers to fundamental questions regarding the relation of citizens to the state, and to each other, is eternal. The so-called “developed states” are still building their institutions and governance processes, as is clear from recent developments in Europe. The earliest stages of state-building are especially fraught, and less-developed states must be understood within their evolutionary context. According to the World Bank, “International assistance needs to be sustained for a minimum of 15 years to support most long-term institutional transformations.”111 Since this estimate includes states not in the midst of conflict, we should assume significantly more time for those suffering from conflict, such as Afghanistan or Iraq; with very limited existing infrastructure or resources, like Mali or Guatemala; or with highly complex compositions, for example, the DRC. Imposing unrealistic timeframes onto such polities can result in “premature load-bearing,” where new processes and practices are forced onto institutions before they have been tested and the environment prepared.112
Development organizations often refer to 5- or 10-year plans; perhaps it would be more appropriate to consider realistic expectations of a country in 25 or even 30 years, and work backwards from there, establishing realistic benchmarks along the way. In that longer trajectory, rational sequencing of priorities may be more strategic. Consider, for example, that many of today’s strong democratic states began their trajectory toward democracy with periods, even prolonged periods, of autocratic rule as they built the bureaucratic and economic architecture of the state—a fact we must seriously consider. Research suggests that, at least for postconflict states, the highest immediate priority—often requiring as long as a decade—must be avoiding a conflict relapse, which may require the prolonged prioritization of security and stability.113 In recent decades, the Western emphasis on democratization has, on occasion, hurried processes that historically have taken decades, if not generations. By now, we should have learned the lesson that “democratization before economic take-off, and incomplete democratizations, also entails important risks. In particular, expectations are raised that cannot be satisfied, and clientelistic systems continue or even intensify where the potential authoritarian top-down control is not replaced by effective accountability to citizens.”114 In state-building, patience is not only a virtue, it is a necessity.
Of near, if not equal, importance to patience is the need to embrace evidence-based adaptation as the state-formation process proceeds. As execution begins to show results, revision of the original plan may be appropriate or required. In state formation, so much is subject to unpredictable and nonlinear developments.115 Patient adherence to a well-conceived plan can easily slide into stubborn insistence if evidence of failure begins to cascade, in which case not only flexibility but rapid adaptation to emerging circumstances is required.
These reflections provide merely a snapshot of what is entailed in the epic task of state-building. While we offer merely a notional short list of state responsibilities and functions (of which there are many more), described in a summary manner, we emphasize the centrality of the state to sustaining the rule-based world order. A world without order is a frightening prospect that recalls Hobbes’s characterization of the natural state of mankind before government is established as “every man against every man,” and a life that is “solitary, poor, nasty, brutish, and short.”116