By Tristan Hale, Head of Digital and Communications, Sphere
What do humanitarian organisations perceive as the most serious risks of AI?
In a recent survey of humanitarian organisations conducted by Sphere, the 59% of respondents who claimed to have at least some knowledge of AI risks identified inaccuracy of outputs, dependency on AI, and dehumanisation of aid as the most serious.
AI has the potential to do good in the humanitarian sector, but it can also cause harm to vulnerable people.
How can humanitarian organisations – or people within those organisations responsible for procurement – make good decisions on when, where and how to use AI? How can these organisations demonstrate to potential donors that they are using AI safely and responsibly? Or looking from the other side, how can donors ensure their potential grantees will use AI responsibly?
How can we support humanitarians who want to benefit from AI to do so safely?
There are several possible approaches to answering these questions, including capacity building, standards and/or guidance, and certification.
Sphere is currently working with Nesta (the UK’s innovation agency for social good), Data Friendly Space, and CDAC Network, supported by FCDO and the UKHIH, on scoping an AI Safety Label. What if you could demonstrate that your organisation’s use of a particular AI platform in a particular context exceeds a reasonable safety threshold?
Our AI Safety Label concept is built on three key components.
As part of our research, we’ve interviewed dozens of people, including auditors, standards providers, and people in humanitarian organisations in field, procurement, policy and technical roles. We’re grateful to everyone who has offered us their time and told us what they like and what they think won’t work about these three components.
We asked participants at our Global Focal Point Forum (GFPF) in Antalya to design what the label could look like and to consider the feasibility of an organisational assessment of the capacities required to use AI safely.
This session confirmed that there is great interest in AI in the sector, but that relatively few organisations currently have the capacity to navigate the technical complexities, legal requirements and ethical considerations involved in making good decisions about AI.
Aleks Berditchevskaia (Nesta) facilitates a group activity around organisational capacities for AI safety at GFPF 2024, Antalya, Türkiye, November 2024
To test stage three, we organised two workshops, conducted in Turkish, with people from communities affected by the February 2023 Kahramanmaraş earthquake.
As part of this process, we asked participants how they felt about an AI platform using satellite imagery to estimate earthquake damage to their homes.
Sphere trainer Zeynep Sanduvaç (Nirengi Derneği) and Esther Moss (Nesta) facilitate a community testing workshop in Antakya, Türkiye, November 2024
An interesting takeaway from these workshops is that where there is dissatisfaction with human decision-making, AI may be welcomed rather than feared. In the words of one participant (translated from Turkish):
“I believe that AI can evaluate building damages without being influenced by emotions. In large-scale disasters like the Kahramanmaraş earthquake, humans are deeply affected, which could make their assessments less accurate.”
We tend to set a very high bar for AI in terms of accuracy, lack of bias and so on, while simultaneously tolerating bias, emotional influences, and political agendas in humans. But there is good reason for this: if a human makes a mistake, they can be held accountable. If an AI platform makes a mistake, it may be difficult to determine who is responsible.
Our research has shown that the safety label concept resonates with humanitarians, and we’re now iterating the design to ensure it meets the varied needs across the sector.
If your organisation is exploring the intersection of AI and community engagement and wondering how to use AI safely and responsibly, please get in touch.
For media enquiries or to follow up on the AI safety label, please contact communications@spherestandards.org