20 July 2022
Magic Numbers? Why the Politics of Indices Are a Problem Rather Than a Solution
story highlights

Rankings frequently influence how state behavior is perceived, how states react, and how they develop responsive strategies.

However, rankings always contain value judgements, methodological choices, and implicit political aims.

Uncritical acceptance of rankings can therefore result in the unintended internalization of normative assumptions, leading to poorer, not better, public policy outcomes.

By Paul Jackson, Professor, School of Government and Society, University of Birmingham, UK, and Louis Meuleman, Visiting Professor, Public Governance Institute, Leuven University, Belgium. Both are members of the UN Committee of Experts on Public Administration (CEPA).

In Douglas Adams’ The Hitchhiker’s Guide to the Galaxy, the supercomputer Deep Thought calculated the answer to the ultimate question of life, the universe, and everything as ‘42’. In a similar way, numerical indices seek to distil complex systems into simple numbers that provide comparable rankings. We argue that such approaches can be as problematic as they are useful.

The think tank Sustainable Development Solutions Network (SDSN) recently published its annual SDG index, which ranks the progress countries have made on the UN Sustainable Development Goals. As usual, the highest-ranking countries are Finland, Denmark, Sweden, and Norway. Bhutan, a country known for having sustainable development at the heart of its policy and practice, ranks only 70th. In the SDSN index, Ireland ranks ninth. As Ireland ranks second in the Transitions Performance Index published by the European Commission, and only 15th in the World Happiness Index (also by SDSN), we might conclude that excellent transition performance does not make people happy; or could these indices be based on different assumptions and data?

Rankings frequently influence how state behavior is perceived, how states react, and how they develop responsive strategies. However, rankings always contain value judgements, methodological choices, and implicit political aims. Uncritical acceptance of rankings can therefore result in the unintended internalization of normative assumptions, leading to poorer, not better, public policy outcomes.

Dozens of global institutions, national organizations, private sector actors, and non-governmental organizations (NGOs) currently issue rankings. The ‘Big Three’ private credit agencies rate and rank the creditworthiness of states. The international NGO Transparency International produces the Corruption Perceptions Index. The World Bank’s Doing Business Index rates regulatory environments. The US NGO Freedom House stratifies states into “Free,” “Partly Free,” and “Not Free.”

Over the last two decades, international ranking indices have emerged as an important tool for those engaged in governance. Governments are now ranked by a bewildering array of indices aimed at a wide range of national policymakers, transnational activists, bureaucrats, and media. They are based on at least three related trends: 1) the wider use of performance management in modern political life; 2) the strengthening of global networks and the need for standardization, comparability, and evaluation; and 3) the expansion of new data sources and the use of new technologies.

There is a difference between ratings and rankings. Most ratings are designed to measure entities independently against a set of criteria. Rankings, however, are a zero-sum game: like a fixed budget, a ranking allows one person or group to gain only by causing another to lose. States, for example, are assigned a ranking relative to other states, thereby conferring approval or disapproval on relative state performance. Rankings also leave significant room for a wide variety of claims based on marginal changes in position.
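The zero-sum logic can be made concrete with a small numerical sketch. The scores and country labels below are invented for illustration, not drawn from any real index; the point is only that when one entity’s rating improves, another entity’s rank can fall even though its own rating is unchanged:

```python
# A minimal sketch of the rating/ranking distinction, using invented
# scores for three hypothetical countries (not from any real index).

scores = {"A": 72, "B": 68, "C": 65}  # ratings: absolute scores against criteria

def rank(ratings: dict[str, int]) -> dict[str, int]:
    """Assign 1-based ranks, highest rating first."""
    ordered = sorted(ratings, key=ratings.get, reverse=True)
    return {country: position + 1 for position, country in enumerate(ordered)}

print(rank(scores))   # {'A': 1, 'B': 2, 'C': 3}

scores["C"] = 70      # only C's own rating improves...
print(rank(scores))   # {'A': 1, 'C': 2, 'B': 3}
# ...yet B drops from 2nd to 3rd with no change in its own performance:
# ranks are relative, so one state's gain is necessarily another's loss.
```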

This raises the question of how rankings are used in global governance. We contend that there are four basic roles, although these are not mutually exclusive:

  1. Rankings may provide expert judgements on the performance of states and entities;
  2. They may enhance the ability of global institutions to regulate or monitor the behavior of states;
  3. They may be effective advocacy tools for specific issues;
  4. They can also act as ‘flag planting’ tools to establish the credibility of NGOs or other organizations seeking to provide leadership.

At root, most rankings are designed to exert normative pressure on states to conform to what the index measures, to promote change, or to adjust policies. The proliferation of indices represents a form of technology of governance, creating, disseminating, and evaluating new forms of knowledge. It is in line with the global increase in standardization, measurement, and evaluation of complex social processes. This institutionalizes a view of the modern world as measurable, comparable, and stratified, existing within a kind of competitive league in which states strive constantly to perform better. However, although this world view is popular, it is certainly not universal.

The official composite indicator (in fact an index) on SDG 17.14 – policy coherence for sustainable development – has eight sub-indicators pointing to very different institutional challenges, such as leadership, horizontal and vertical coordination, and stakeholder participation. Bringing them together in one number adds little value. One country can have the highest score on institutional leadership while scoring lowest on inclusiveness; another country with the same index result could score the other way around.
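A minimal sketch shows how aggregation masks opposite profiles. It uses an equal-weight average over four of the institutional dimensions named above; the real SDG 17.14 indicator has eight sub-indicators and its own methodology, and the scores here are invented:

```python
# A minimal sketch of how a composite index masks opposite profiles.
# Four illustrative sub-indicators with invented 0-100 scores; the real
# SDG 17.14 indicator has eight sub-indicators and its own weighting.

country_a = {
    "leadership": 90,                 # highest on leadership...
    "horizontal coordination": 60,
    "vertical coordination": 60,
    "stakeholder participation": 10,  # ...lowest on inclusiveness
}
country_b = {
    "leadership": 10,                 # the mirror-image profile
    "horizontal coordination": 60,
    "vertical coordination": 60,
    "stakeholder participation": 90,
}

def composite(sub_scores: dict[str, float]) -> float:
    """Equal-weight composite: the arithmetic mean of the sub-scores."""
    return sum(sub_scores.values()) / len(sub_scores)

print(composite(country_a))  # 55.0
print(composite(country_b))  # 55.0 -- identical index, opposite profiles
```

The single number 55.0 tells a policymaker nothing about which of the two very different institutional challenges a country actually faces.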

This lack of underlying clarity has led to a very mixed set of responses from states. Some governments are keen to showcase an improving ranking in public media, taking out advertisements and securing press coverage in outlets like The Economist. Others use rankings to identify specific points that can be used to launch policy initiatives. States may also attack the basis of the index itself: China criticized the World Bank’s Doing Business Index in 2013, and during the Eurozone crisis, the EU called credit rating agencies “counterproductive.” Such criticisms are frequently backed up by examples such as the Fragile States Index failing to identify the fragility within North Africa that erupted into the Arab Spring, or the failure (or complicity) of credit rating agencies in the financial crash of 2007-2008.

Acceptance of rankings as “scientific” has become near universal. Generally, an index is more accepted the more sophisticated it appears. If something looks scientific, then it must be science, right? The more complex and convoluted the methodological infrastructure underlying an index, the more credible it seems, even if it contains significant errors. This effect may be enhanced by the involvement of prominent scholars, who lend an additional layer of legitimacy, or simply confirmation bias. The plausibility of ranking results also plays a role in the acceptance of an index: if the top- and bottom-ranked countries are in line with expectations, the underlying calculations and data are often considered correct, even if some countries have a surprisingly high or low ranking.

Much criticism of rankings focuses on the technical approaches they take and on the positioning of entities within the ranking itself. This is all important, but it distracts from analyzing the process of ranking itself. Constructing a ranked index is a practice that embodies a number of assumptions about the world, including that competition is good, or is even a reasonable way to compare states as diverse as Chad and Luxembourg. A “successful” ranking is one that has managed to convince the audience that its show is reality itself (e.g., Barankovic 2022).

Although we see the merits of SDG indices like the one produced by SDSN, as they generate debate on what works well and what does not, they also create confusion. SDSN’s SDG Index, for example, uses a partially different set of indicators than Eurostat does to measure the SDG performance of EU countries.

A numerical index resembles a single bullet directed at a moving target. That doesn’t sound very rational. Indices are only a means, not an end: they can indicate and inform, but not prescribe, the necessary course of action. However, indices and the indicators that constitute them can be used as part of a broader approach in which an accountable and inclusive process of defining the necessary interventions is central. In the end, it is the quality of the conversation that counts, not the number.

The UN Committee of Experts on Public Administration was established by the Economic and Social Council (ECOSOC) in 2001. Its 24 members meet annually at UN Headquarters in New York, US, to support the work of ECOSOC on the promotion and development of public administration and governance, including in relation to the 2030 Agenda for Sustainable Development and the SDGs.
