Refugee camp for Rwandans in Kimbumba, eastern Zaire (current Democratic Republic of the Congo), following the Rwandan genocide. Photo: CDC.
This interview was conducted in November 2012 as part of a series of interviews for a video documentary on the history of Sphere released later that year. The written version of the interview was reviewed and amended by the interviewee in March 2014.
A Senior Research Associate with the Humanitarian Policy Group, John Borton has worked in the humanitarian sector for 30 years in a variety of capacities. As a Research Fellow at the UK-based Overseas Development Institute (ODI), he led the process of establishing what are now the Humanitarian Practice Network (HPN), the Humanitarian Policy Group (HPG) and the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP). In 1995-96 Borton led Study III (the humanitarian study) of the Joint Evaluation of Emergency Assistance to Rwanda (JEEAR).
Twenty years ago, on 7 April 1994, the tragedy of the Rwandan Genocide began to unfold. Within just three months, approximately 800,000 men, women and children were killed in a massacre the international community of nations failed to stop. Around 2 million people fled the killings, becoming refugees in neighbouring Tanzania and Zaire (today’s Democratic Republic of the Congo). Another million were displaced within Rwanda.
The international humanitarian response to the Rwanda crisis, and specifically to the massive refugee outflow (some 850,000 people fled into the city of Goma in Eastern Zaire in just four days in mid-July 1994), was the largest to date. And not an easy one: the mortality rate in Goma in late July, during a cholera outbreak, was the highest ever recorded in a refugee population. By the end of the year some 80,000 people had died in refugee and internally displaced persons camps, principally from cholera and dysentery.
Seven months after the genocide began, an unprecedented multinational, multi-donor evaluation was launched: the Joint Evaluation of Emergency Assistance to Rwanda (JEEAR). The evaluation consisted of four separate studies, the third and largest of which was specifically dedicated to the humanitarian response and its effects.
There were, of course, excellent agencies that did great work in that dreadful situation. But there were some agencies that really just should not have been there
The evaluation was highly critical of the absence of an effective political and military response to the genocide by the international community. In such a context, humanitarian action effectively became a substitute for political action and humanitarian agencies were forced to work in extremely challenging, almost impossible situations.
For the most part, we found that humanitarian agencies had done good work, but there were some significant failings, including the failure to anticipate and better manage the huge refugee influx into Goma as well as the poor work and lack of professionalism of some agencies. The evaluation made it easier to identify shortcomings and to focus attention on how to address them through some strong recommendations.
One of them was that some form of regulation or enforcement was needed to ensure that agencies adhered to the then recently published Red Cross/NGO Code of Conduct and the few international standards that were then available. This came out of what we had seen in the response in Goma, where the combination of a nearby airport, huge media coverage and a weakened government authority made it very easy for unprofessional organisations to show up in a cargo plane and set up as ‘humanitarian NGOs’.
There were, of course, excellent agencies that did great work in that dreadful situation. But there were some agencies that really just should not have been there. We came across one NGO that had left people unattended on drips during the cholera outbreak and probably contributed to the avoidable death of some of those affected.
To address the issue of the variable quality and professionalism within the humanitarian sector, Study III’s draft report, submitted in October 1995, recommended the establishment of an NGO accreditation system aimed at ensuring beneficiaries receive an acceptable professional standard of service and care.
This recommendation prompted a lot of discussion. Though we held on to our belief that an accreditation system would be necessary to achieve the change that was needed, we accepted the argument that it was desirable to avoid imposing an external mechanism on NGOs.
As a result, the final evaluation synthesis report actually offered two options: self-managed regulation by NGO networks or an international accreditation system to be developed by donor, UN and Red Cross agencies and NGOs.
We welcomed the development of a proposal to establish humanitarian standards. We didn’t know where it was going, how it would evolve, but it looked like a good initiative
Soon after we had submitted our draft report, we learnt that preliminary discussions were already under way between some of the largest NGOs to develop a set of humanitarian standards. During the weeks until the final report was submitted, there was a sort of parallel process going on between the Study III team and those involved in those discussions.
The knowledge that the evaluation was likely to recommend an accreditation mechanism gave impetus to those discussions. The ‘cross-fertilisation’, so to speak, was facilitated by the fact that some individuals – Peter Walker, Nicholas Stockton and Joel McClellan – were both involved in those discussions and sitting on the JEEAR Steering Committee.
In the final version of Study III we welcomed the development of a proposal to establish humanitarian standards. We didn’t know where it was going, how it would evolve, but it looked like a good initiative so the final report gave support to what eventually became the Sphere Project.
The JEEAR report also recommended the creation of a humanitarian ombudsman function to establish a system whereby complaints about agency performance could be brought before an independent authority. One year later, that led to the development of the Humanitarian Ombudsman Project by some British agencies, which eventually became the Humanitarian Accountability Partnership (HAP) International.
The large JEEAR Steering Committee brought together donors, the UN, the Red Cross and NGOs – which in those days was very unusual. Subsequently, we developed the idea for creating a similar sort of group to focus specifically on issues of accountability and performance. That was how the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) came into being.
As well as being a massive human tragedy, the Rwandan Genocide had been traumatic for humanitarian agencies and prompted them to reflect on how to improve their performance and accountability.
It was certainly the biggest evaluation that had been carried out of a humanitarian operation at that time. It confronted a lot of issues and it looked at how the whole system had performed – that was new
An issue that particularly exercised agencies at the time was that in their immediate response to the huge influx of refugees into Eastern Zaire they were not only assisting civilians, but also the very people who had been involved in perpetrating the genocide and carrying out the killings. The realisation that they had been ‘feeding the killers’ came to haunt the agencies.
In our view, that problem was not the primary responsibility of humanitarian agencies. It needed the intervention of the UN Security Council and military forces. The UN proposed a peacekeeping force to provide security in the camps, separate the different groups within the refugee population and move the camps further back from the border with Rwanda. But only one country offered troops, and the proposed force did not materialise. This placed humanitarian agencies in an impossible situation.
Agencies were often running their services during the day and pulling out of the camps at night for security reasons. It was a huge dilemma. Most agencies took the view that the majority of the refugees needed their support and they continued providing it. Some other agencies chose to halt their operations and withdraw from the camps.
It was a traumatic time; agencies were all traumatised. I think the Rwanda evaluation gave momentum and a focus to that angst. And so, yes, the evaluation did provide a vehicle for significant change. People called it a ‘landmark’ evaluation. It was certainly the biggest evaluation that had been carried out of a humanitarian operation at that time. It confronted a lot of issues and it looked at how the whole system had performed – that was new.
Looking back, it was a privilege to be involved in an evaluation that put some good recommendations on the table and helped the international and the NGO communities embark on a journey that actually went on to achieve important changes. Some of the impetus for those changes was already there but I think it is fair to say that the evaluation gave added momentum, and to some extent, helped channel it.
The late 1990s and early 2000s was a very creative period in the sector; there were a lot of positive things happening. The Red Cross/NGO Code of Conduct was published in 1994, in the middle of the Rwandan Genocide. Then, as I’ve already mentioned, ALNAP, HAP and the Sphere Project came into being. People In Aid first published its Code of Good Practice in 1997.
Has such a ‘revolution’ realised its potential? (…) I wonder whether we haven’t actually walked around some of the issues the Rwanda evaluation tried to confront
In the francophone world, initiatives like Coordination SUD and Groupe URD were also working on quality of aid issues at the time. The latter began developing its method of quality assurance for humanitarian projects in 1999.
In 2000, I gave a presentation in Geneva and deliberately posed the question in the title: “Is this an accountability revolution?” I wanted to raise the profile of all that was going on and encourage people to view it as a possible revolution within the sector.
Has such a ‘revolution’ realised its potential? It has achieved a lot. But whilst I admire all that has been done, I wonder whether we haven’t actually walked around some of the issues the Rwanda evaluation tried to confront. Compromises have been made and the structures that have been created are less than perfect – that’s why I think many of the challenges are still there.
The sector is significantly larger and more complex than 20 years ago. There are a lot more agencies and people involved and the funding levels are significantly higher. The range of areas of assistance has also increased – psychosocial services, education and livelihoods are now regarded as basic humanitarian services. There are new actors involved – from the private sector to the military. The sheer complexity of the system makes it harder to address the outstanding issues now than was the case 20 years ago.
A key outstanding issue is that, despite all the work on standards and accountability, during the response to the 2010 earthquake in Haiti unqualified and unprofessional agencies were able to land their planes at Port-au-Prince airport and set up programmes on the ground – just as had happened in Goma 20 years before. They simply shouldn’t have been allowed in!
I find it depressing that all our collective efforts to improve accountability and performance have not produced a mechanism to prevent this from happening. There is still a need for a system for accrediting international humanitarian agencies, so as to prevent unqualified and unprofessional agencies from getting access to the affected population.
Some of the issues that prompted Sphere and all that energy around quality and standards after the Rwandan Genocide still haven’t been fully sorted out
So, although I’m proud of what the Rwanda evaluation achieved and set in motion, I’m disappointed that the system has not been better able to address the issue of professional competence and therefore who is allowed to work in emergency operations and who is allowed to raise funds as a ‘humanitarian agency’.
Many of the challenges are still there, and they have become more complex and harder to address.
Although the Sphere Project’s establishment and development is a good and positive story, the fact remains that some of the issues that prompted Sphere and all that energy around quality and standards after the Rwandan Genocide still haven’t been fully sorted out.
I think Sphere has achieved a lot in terms of providing a common language across societies, cultures and different languages. However, I do wonder how many agencies actually monitor their performance against Sphere standards and share the results with their supporters and other agencies. Some donors require reporting against Sphere standards, but the majority don’t.
Wouldn’t it be fantastic if all agencies reported their progress in ‘meeting or exceeding the Sphere standards’ in every operation? Such information would be incredibly useful for coordination, for advocacy and for learning purposes.
So, personally, I don’t think Sphere’s potential for driving quality improvement has yet been fully exploited. You have this great set of standards and indicators developed by experts from a large number of agencies that provide a framework for improving performance in the humanitarian sector… But I think it needs more push, more conscious monitoring of those standards and indicators and more openness amongst agencies to share their results to really exploit the full potential of Sphere.