How can humanitarian organisations use AI safely?

By Tristan Hale, Head of Digital and Communications, Sphere

Use and perceptions of AI in humanitarian organisations

In a recent survey of humanitarian organisations conducted by Sphere:

  • 51% of respondents said that their organisation was using – or exploring the use of – Artificial Intelligence (AI);
  • 57% had received no training or guidance on AI, or weren’t aware of any organisational policies on its use; and
  • just 6% claimed to be highly familiar with the risks of AI.

The 59% of respondents who claimed to have at least some knowledge of AI risks identified inaccuracy of outputs, dependency on AI, and dehumanisation of aid as the most serious.

What do humanitarian organisations perceive as the most serious risks of AI?

  • Inaccuracy (e.g. ‘hallucinations’ in generative AI outputs): 44%
  • Dependency on AI / loss of control: 44%
  • Dehumanisation of aid (reduced human contact): 44%
  • Cybersecurity threats / personal privacy violations: 38%
  • Misinformation and manipulation: 29%

To AI or not to AI?

AI has the potential for good in the humanitarian sector, but also has the potential to cause harm to vulnerable people.

How can humanitarian organisations – or people within those organisations responsible for procurement – make good decisions on when, where and how to use AI? How can these organisations demonstrate to potential donors that they are using AI safely and responsibly? Or looking from the other side, how can donors ensure their potential grantees will use AI responsibly?

How can we support humanitarians that want to benefit from AI to do so safely?


AI Safety Label

There are several possible approaches to answering these questions, including capacity building; standards and/or guidance; and certification.

Sphere is currently working with Nesta (the UK’s innovation agency for social good), Data Friendly Space, and CDAC Network, with support from FCDO and the UKHIH, on scoping an AI Safety Label. What if you could demonstrate that your organisation’s use of a particular AI platform in a particular context exceeds a reasonable safety threshold?


A three-stage assessment process

Our AI Safety Label concept is built on three key components (see the sketch after this list):

  1. Technical assessment: Test the AI platform against metrics like performance, accuracy, usability, bias, resource utilisation, transparency, explainability, latency, speed, etc.
  2. Organisational capacity assessment: Check that the humanitarian organisation intending to use the AI platform has the capacity to do so. For example, if the AI model requires personal or sensitive data, does the organisation take sufficient care over cybersecurity? Do staff have the correct training to use the AI platform?
  3. Social acceptability and risk assessment: A system that performs well in one context may be harmful in another, and one part of determining appropriateness in context is acceptance by the community that will be affected by decisions based on the outputs of the AI platform.
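
To make these three stages concrete, here is a minimal sketch, in Python, of how the results of the three assessments might feed a single label decision. The stage names, example scores and thresholds below are illustrative assumptions for this post, not the actual criteria of the AI Safety Label, which are still being scoped.

```python
from dataclasses import dataclass

# Illustrative only: the stages mirror the three assessments described above,
# but the scores and thresholds are assumptions, not Sphere's actual criteria.

@dataclass
class StageResult:
    name: str
    score: float      # normalised 0.0-1.0 from the stage's own metrics
    threshold: float  # minimum acceptable score for this stage

    def passes(self) -> bool:
        return self.score >= self.threshold


def label_decision(stages: list[StageResult]) -> bool:
    """A label is only awarded if *every* stage clears its threshold.

    Averaging is deliberately avoided: a strong technical score should not
    be able to compensate for weak organisational capacity or for a use
    that the affected community finds unacceptable.
    """
    return all(stage.passes() for stage in stages)


if __name__ == "__main__":
    assessment = [
        # 1. Technical assessment (accuracy, bias, transparency, latency, ...)
        StageResult("technical", score=0.82, threshold=0.70),
        # 2. Organisational capacity (data protection, staff training, ...)
        StageResult("organisational_capacity", score=0.64, threshold=0.70),
        # 3. Social acceptability and risk in the specific context of use
        StageResult("social_acceptability", score=0.75, threshold=0.70),
    ]
    for stage in assessment:
        print(f"{stage.name}: {stage.score:.2f} (needs {stage.threshold:.2f}) "
              f"-> {'pass' if stage.passes() else 'fail'}")
    print("Label awarded:", label_decision(assessment))
```

Requiring every stage to pass, rather than averaging across them, reflects the point made in stage three: strong technical performance cannot compensate for weak organisational capacity or for a use that the affected community does not accept.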

As part of our research, we’ve interviewed dozens of people, including auditors, standards providers, and people in humanitarian organisations in field, procurement, policy and technical roles. We’re grateful to everyone who has offered us their time and told us what they like and what won’t work about these three components.


Sphere network consultation

We asked participants at our Global Focal Point Forum (GFPF) in Antalya to design what the label could look like, and to consider the feasibility of an organisational assessment of the capacities required to use AI safely.

This session confirmed that there is great interest in AI in the sector but that relatively few organisations currently have the capacity to navigate the technical complexities, legal requirements and ethical considerations to make good decisions about AI.

Aleks Berditchevskaia (Nesta) facilitates a group activity around organisational capacities for AI safety at GFPF 2024, Antalya, Türkiye, November 2024

Community testing in Hatay

To test stage three, we organised two workshops, conducted in Turkish, with people from communities affected by the February 2023 earthquake.

As part of this process, we asked participants how they felt about an AI platform using satellite imagery to estimate earthquake damage to their homes.

Sphere trainer Zeynep Sanduvaç (Nirengi Derneği) and Esther Moss (Nesta) facilitate a community testing workshop in Antakya, Türkiye, November 2024

An interesting takeaway from this workshop is that in cases where there is dissatisfaction with human decision-making, AI may be welcomed rather than feared. In the words of one participant (translated from Turkish):

“I believe that AI can evaluate building damages without being influenced by emotions. In large-scale disasters like the Kahramanmaraş earthquake, humans are deeply affected, which could make their assessments less accurate.”

We tend to set a very high bar for AI in terms of accuracy, lack of bias, etc., while simultaneously tolerating bias, emotional influence, and political agendas in humans. But this is with good reason: if a human makes a mistake, they can be held accountable; if an AI platform makes a mistake, it may be difficult to determine who is responsible.


Next steps and call to action

Our research has shown that the safety label concept resonates with humanitarians, and we’re now iterating the design to ensure it meets the varied needs across the sector.

If your organisation is exploring the intersection of AI and community engagement and wondering how to use AI safely and responsibly, please get in touch.


For media enquiries or follow-up on the AI Safety Label, please contact communications@spherestandards.org