AI and the third sector: Catch-22?
By Kye Lockwood, CEO, DataKind UK. Reproduced with permission from Rank Ripples magazine
Using AI is a challenge for a sector rooted in values, ethics, and justice, and it doesn’t have an easy solution.
At DataKind UK, we've been going back and forth debating the question ‘How can the third sector use generative AI tools responsibly?’ So much so that we recently hosted a webinar posing that same question, in search of some answers.
It's a challenge for a sector rooted in values, ethics, and justice. And, like so many of the challenges we address, it doesn’t have an easy, one-size-fits-all solution.
No universal answer
You might expect, as 'data experts,' that we'd have all the answers. However, we believe that defining 'responsible AI' must be rooted in your organisation's values, not a universal checklist. When we support third sector organisations with measuring their impact, we don't claim to be the experts on what constitutes impact (that's for you and your beneficiaries to decide), and likewise, we can't be prescriptive about what responsible AI looks like for every organisation.
What do we mean by ‘AI’?
Before we go any further, let’s define what we’re talking about when we talk about AI. Here we’re focussing on generative AI, artificial intelligence that creates new content, whether that's text, images, or even videos. Think of ChatGPT writing emails, DALL-E creating images, or tools like Claude helping with reports.
Unlike traditional software that follows set rules, these systems learn patterns from massive amounts of data and use that knowledge to generate something new. These models work by predicting what should come next.
When you ask ChatGPT a question, it's essentially making incredibly sophisticated guesses about what the most likely response would be, based on patterns it learned during training. It's remarkably good at this, but it's still making educated guesses, and crucially, not accessing a specific source of truth.
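To make that 'educated guessing' concrete, here is a toy sketch. It is a hypothetical illustration only, not how tools like ChatGPT are actually built (they use neural networks trained on billions of documents rather than simple word counts), but it shows the same core idea: learn patterns from text, then predict the most likely next word.

```python
from collections import Counter, defaultdict

# A tiny training corpus. Real models learn from billions of documents,
# but the principle is the same: learn which words tend to follow which.
corpus = (
    "the charity supports local families "
    "the charity supports community projects "
    "the charity funds local projects"
).split()

# Build a table of "word -> what usually comes next"
next_words = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_words[current][following] += 1

def predict_next(word):
    """Guess the most likely next word, based on patterns seen in training."""
    options = next_words.get(word)
    if not options:
        return None  # no pattern learned for this word
    return options.most_common(1)[0][0]

print(predict_next("charity"))  # -> "supports" (seen twice, vs "funds" once)
print(predict_next("local"))    # -> "families" (a tie with "projects"; it just picks one)
```

Notice there is no 'source of truth' anywhere in this process: the prediction is only as good as the patterns in the training data, which is why larger models can be fluent and confident while still being wrong.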
Risks, inaccuracy, and global costs
Indeed, what makes these tools so convincing can also be their downfall. AI can confidently present completely false information: earlier this year the BBC found that 51% of AI-generated news summaries had issues. These systems can perpetuate existing biases, struggle with mathematical calculations, and create what's increasingly known as 'slop': AI summaries of dubious usefulness filling previously helpful platforms.
Then there are the significant environmental and social costs that many charities (especially those focused on sustainability or social justice) need to confront when considering GenAI use. According to MIT, “researchers have estimated that a ChatGPT query consumes about five times more electricity than a simple web search”*, and the infrastructure required for AI is incredibly resource intensive.
Coupled with this is the human cost. Behind every ‘intelligent’ system are human workers, often in the Global South, labelling data and reviewing AI outputs. These roles frequently involve disturbing content, and are typically low paid, with poor working conditions.
Making responsible choices
The question for your charity becomes stark: Does using AI tools align with your organisation’s values around environmental protection and fair labour practices? This tension sits at the core of responsible AI adoption.
At DataKind UK, our most fundamental approach is one that echoes throughout all our work: start with the problem you are trying to solve, not the technology. We've seen charities deploy generative AI for the more typical use cases such as note-taking or content generation for fundraising, reports, or marketing.
But we’ve also seen innovative, positive uses:
Citizens Advice's ‘Caddy’ copilot, which helps advisors quickly find information.
Analysing survey responses to draw out themes that would have taken weeks to identify manually.
A framework you can use
Unfortunately, we can't hand you a ready-made checklist for responsible AI, although the folk over at mySociety have developed an excellent AI framework that can be applied to charities. It covers six domains:
Practical questions: Are we solving a real problem, or working backwards from a solution? Is this the best way to address our challenge, or are we just excited by new technology?
Societal questions: What are the best and worst-case scenarios of our AI use? Is this shift consistent with our strategy and ethical framework?
Legal and ethical questions: What's the nature of the organisation producing these tools? Is the training data publicly available? Are we comfortable with the intellectual property implications?
Reputational questions: Does this tool touch on areas requiring high accuracy or trust? Could it create potential for bad faith attacks on our services?
Infrastructural questions: What are the long-term costs of deploying this tool? Do we have the skills to manage it sustainably?
Environmental questions: Are we tracking the ongoing environmental impact? Are there more efficient alternatives?
Asking hard questions
Responsible tech use means deploying AI strategically where it genuinely adds value, while advocating for more sustainable and ethical AI development: championing open-source tools, using ‘frugal AI’, and interrogating AI supply chains.
Ask hard questions:
Do these tools align with your values?
Are there less resource-intensive alternatives?
Remember, your goal isn't to be cutting-edge; it's to better serve your mission in a way that doesn't undermine the very causes you're working to support. Sometimes that means using AI responsibly. Sometimes it means choosing not to use it at all.
*Estimates vary, and a major difficulty is lack of transparency and data, though public pressure is slowly changing that.