Posted by Brett Dodge

The value of ‘counting’ in modern slavery due diligence

Attempts to ‘count’ victims of modern slavery are proliferating. In this blog I caution against relying too heavily on what are necessarily broad estimates, particularly when assessing where supply chain risks lie, and suggest that there are other, better sources of risk information.

A new global estimate of forced labour for 2017 was published in September by Alliance 8.7 – a partnership of the International Labour Organization (ILO) and the Walk Free Foundation (WFF). The two groups, together with the International Organization for Migration (IOM), have produced the following figures:

  • an estimated 40.3 million people are trapped in modern slavery;
  • 71% of them are women;
  • 16 million are in forced labour in the private sector;
  • of those, 2% are in construction, 15.1% in manufacturing and 11.3% in agriculture and fishing;
  • slavery-like situations last approximately 18 months on average.

In October this year, the UK Home Office reiterated its own estimate of between 10,000 and 13,000 potential victims of modern slavery in the UK. The publication of these numbers sparked some vigorous debate in our office.

None of us denies that ‘modern slavery’ is a grave global problem that must be eradicated. The issue is that forced labour is a hidden, clandestine activity that cannot really be measured with any accuracy; yet however often these numbers are labelled as estimates, there is a tendency to treat them as facts – facts that are driving policy responses to modern slavery by some governments and businesses. In our work, this has implications for how modern slavery risk is perceived in supply chain due diligence.

Transparency rather than perfection

Other commentators have previously published detailed critiques of slavery-counting methods that don’t need to be restated here. But it is worth highlighting a key element of the methodology. The Alliance 8.7 estimates are based on Gallup polling data, asking a sample of households about their history of labour exploitation. The polls were administered to 17,000 participants in 54 countries. Actual data on victims from governments and the IOM is used to plug gaps, but this cannot hide the fact that the polling base is quite narrow, especially given that reported cases are unlikely to represent the true extent of the problem.

The polling results, along with other data, are incorporated into sophisticated algorithms to extrapolate estimates of slavery for certain sample countries. These extrapolated figures are then used to further extrapolate an estimate for all other countries within the same ‘cluster’ (sharing a risk profile). The Global Slavery Index – GSI – (also developed by the Walk Free Foundation, but a completely separate exercise and not recognised by the ILO) uses a comparable method for data extrapolation which, interestingly, was originally a method for estimating the number of fish in a Swedish fjord.
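The fish-counting method alluded to is, as far as I can tell, capture-recapture (or ‘mark-recapture’) estimation: count how many individuals appear in each of two samples, and how many appear in both, then scale up. A minimal sketch of the classic Lincoln-Petersen form of the idea – all numbers invented for illustration – looks like this:

```python
# Illustrative sketch of capture-recapture ("mark-recapture") estimation,
# the fish-counting approach the fjord anecdote appears to refer to.
# All numbers below are invented.

def lincoln_petersen(n_marked: int, n_second_sample: int, n_recaptured: int) -> float:
    """Classic Lincoln-Petersen estimate of total population size.

    Mark n_marked individuals, later draw a second sample of
    n_second_sample, and count the n_recaptured marked individuals in it.
    The total population is estimated as:
        (n_marked * n_second_sample) / n_recaptured
    """
    if n_recaptured == 0:
        raise ValueError("no recaptures: estimate is undefined")
    return n_marked * n_second_sample / n_recaptured

# Tag 100 fish; later catch 80, of which 20 carry tags:
estimate = lincoln_petersen(100, 80, 20)  # -> 400.0
```

The point of showing this is that the estimate is only as good as its assumptions – independent samples, a closed population – and those assumptions rarely hold neatly for hidden populations such as people in forced labour.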

Notice that I’m using the word ‘extrapolate’ quite a lot here to explain how this is put together. This basically means there are several layers of processing wherein findings are scaled up based on assumptions. In itself, this is not a problem and is probably the only way that a hidden phenomenon like modern slavery can be estimated, but the numbers should be accompanied by a big, bold caveat or ‘health warning’ explaining what can be reasonably interpreted from the data.
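To make the layering concrete, here is a deliberately simplified sketch of the kind of cluster-based extrapolation described above – a prevalence rate derived from surveyed countries applied wholesale to the populations of unsurveyed countries that share the same risk profile. All country names, sample sizes and case counts are invented:

```python
# Deliberately simplified sketch of cluster-based extrapolation.
# All country names, sample sizes and case counts are invented.

# (sample size, reported cases of labour exploitation) per surveyed country
surveyed = {
    "country_A": (5_000, 12),
    "country_B": (4_000, 8),
}

# Pool the survey results into a single prevalence rate for the cluster
total_sampled = sum(n for n, _ in surveyed.values())
total_cases = sum(c for _, c in surveyed.values())
cluster_prevalence = total_cases / total_sampled  # 20 / 9_000

# That one rate is then applied to unsurveyed countries in the cluster
unsurveyed_populations = {
    "country_C": 9_000_000,
    "country_D": 4_500_000,
}
estimates = {
    country: round(pop * cluster_prevalence)
    for country, pop in unsurveyed_populations.items()
}
# estimates -> {"country_C": 20000, "country_D": 10000}
```

Every step bakes in an assumption: that the sample is representative, that the cluster really does share a risk profile, and that prevalence scales linearly with population. Each layer multiplies the uncertainty – which is exactly why the headline numbers deserve that health warning.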

Sure, improvements in methodology and presentation are being made all the time and the Alliance 8.7 work demonstrates this (see the discussion of migration on page 58 of the report for example). But these modern slavery figures can mislead us into thinking we know more about the where and how of modern slavery than we do. Without transparency about what the numbers actually mean or guidance on how they should be used, there is a risk of misuse as part of supply chain risk assessments.

Caution on country estimates

This is less of a problem with the Alliance 8.7 estimates as these only provide data at regional level. But the Global Slavery Index (GSI) – again to be clear, a separate exercise from the Alliance 8.7 estimate – does provide rankings on a country-by-country basis. Care should be taken in using the GSI numbers for supply chain due diligence on modern slavery. There are various reasons for this.

First, there are some definitional issues. When companies talk about modern slavery risk, there is an in-built assumption that they are referring to forced labour. Yet in the GSI, forced labour is not separated out from trafficking, forced domestic work, sexual exploitation, child exploitation or the other categories of abuse grouped under the ‘modern slavery’ umbrella. WFF does make this clear, but you need to read the fine print.

Second, there is an issue with how slavery estimates are tallied that creates a risk of misinterpretation. As an example, a business using the GSI map as a screening tool for modern slavery risk will see that Romania is more ‘red’ than, say, Italy, Spain or the UK. The natural conclusion is that Romania has more modern slavery. Yet in practice, most reports of forced labour associated with Romania involve workers recruited in Romania to work in other countries – i.e. it is high risk as a country of origin and transit, rather than as a destination country. There is a similar discrepancy in the rankings of Cambodia and Thailand, where Cambodian forced labourers found in Thailand are attributed to Cambodia. For supply chain due diligence, it is more relevant to know where forced labour is located than where workers originate (though the latter is relevant for root cause analysis).

To put it plainly, use of this data alone can lead to certain countries, sectors or regions being wrongly prioritized or de-prioritized for action on modern slavery. To be fair, this kind of supply chain due diligence isn’t one of WFF’s explicitly stated intentions for the GSI’s use, but this comes back to the importance of foregrounding caveats about definitions and limitations.

More and better sources

The thing is, businesses can have better sources of information on forced labour risks: they are just not based on estimating numbers of workers in slavery. When we at Ergon produce risk assessment tools for corporations and multi-stakeholder initiatives, we use a variety of data sources relevant to the different dimensions of modern slavery risk and the underlying causes of the phenomenon – such as poverty, poor governance and migration. We think it is important that these sources are transparent, but we also stress that rankings and league tables can only go so far.

It is important for clients to follow through with a reality check before decisions are made. This ‘verification’ takes a bit of digging, but most companies already have a range of tools and resources they can use – including news reports, their own staff, local stakeholders and experts. They are best placed to know the realities of their operating environments, including any country-specific context and the structural factors associated with forced labour, such as poverty-driven migration, extensive casual working patterns and/or unethical recruitment practices. This builds a deeper understanding of a country, region or industry’s risk profile and a more robust basis for action.

Why we measure – the McNamara fallacy

This raises the question of why we are so beholden to quantitative data when the stronger evidence is mostly qualitative. When we know there are statistics, there is, I think, a tendency to assume automatically that they provide a stronger standard of evidence. Statistics tell a simpler story in which, in theory, there is less room for misinterpretation – encouraging us to overlook their weaknesses and limitations. This mentality arises from our wider cultural dependency on data, best summed up as the prevailing attitude that “if you can’t measure it, it doesn’t exist.” This logical misconception has a name: the McNamara fallacy. Robert McNamara was the US Defense Secretary during the Vietnam War, whose wartime strategy relied solely on data models and metrics to the exclusion of any qualitative evidence, even when it was contradictory. If you’re interested in why this is problematic, I recommend reading the article “According to U.S. Big Data, [the U.S.] Won The Vietnam War”.

The lesson here is that we should base important decisions, like prioritizing where to focus on eliminating modern slavery from supply chains, on the best evidence at our disposal. In this case, numbers are only part of the picture.