Can charities use AI responsibly?

By Dulcie Vousden, Head of Data Science, DataKind UK

“Generative AI can be a valuable tool, helping you work more efficiently, reach more people, and focus resources where they matter most. But it comes with genuine environmental and social costs that can't be ignored.”


Since ChatGPT’s launch nearly three(!) years ago, we've been on a journey of deep reflection about generative AI. We know that engaging thoughtfully with this technology, especially one surrounded by relentless hype, is difficult in a sector that is already operating beyond capacity.

We approach this question similarly to how we think about our role in creating impact more broadly. Data is incredibly powerful for understanding needs, evaluating outcomes, and informing decisions, which is precisely why we've made it our mission to help all third-sector organisations achieve their missions by using data more effectively. But we recognise that effective data use will mean different things to different organisations.

In the same spirit, we've found it challenging to be prescriptive about what 'responsible AI' means for the entire third sector. It's not our goal that every organisation rushes headlong into using genAI tools, but nor do we want the technology to be rejected wholesale.

There are genuinely exciting use cases that show what's possible when the technology is treated as one of many approaches and then thoughtfully applied to real problems.

Our goal with this guide is to empower you and your organisation to make your own values-based decisions about what responsible AI looks like in your particular context. It sets out how these tools actually work, the risks that often remain hidden, and the essential questions that can guide your thinking.

What risks do I need to think about?

The risks we’re covering here relate specifically to Generative AI: artificial intelligence that creates new content, whether it’s text, images, or videos. This could include using a tool like ChatGPT to write a grant proposal; DALL-E to create images for a social media campaign; or Claude to generate code that can analyse data. For more about how these systems work, explore our primer and webinar at Understanding Generative AI: Breaking Down the Technology Behind the Hype. Other approaches, such as Machine Learning to find patterns in existing data, come with their own risks and challenges, but we aren’t covering those today.

Generative AI risks operate at two different levels, and distinguishing between them matters when deciding whether and how to use it.

Level one: How you use it

Use-level risks emerge from how you or your organisation uses AI. The encouraging part is that you can meaningfully reduce these risks through careful practices.

Hallucinations are a common issue. These are AI outputs that sound plausible but are factually incorrect. Because generative tools and models are designed to predict realistic-sounding content rather than truth, they can generate convincing language that isn't accurate. This makes them risky for tasks like finding citations, referencing legal cases, or delivering direct services.

There's also the challenge of replicability. Unlike traditional software, which produces consistent outputs, generative AI can create a different response each time you ask the same question, leading to inconsistency in both quality and content. You can manage these risks by ensuring a human reviews all AI outputs before they're used.
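As a small illustration of the replicability point, the sketch below sends the same question to a model several times and prints each answer so a human reviewer can compare them. It is a minimal example only: it assumes the openai Python package and an API key are available, and the model name and prompt are illustrative, not recommendations.

```python
# Minimal sketch of the replicability issue: the same prompt, sent several
# times, can come back with different wording and sometimes different "facts".
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the model name below is illustrative only.
from openai import OpenAI

client = OpenAI()
prompt = "In one sentence, what does a charity trustee do?"

for attempt in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    # Print each answer so a human reviewer can compare them before use.
    print(f"Attempt {attempt + 1}: {response.choices[0].message.content}")
```

Running a loop like this makes the variability visible, which is one reason human review matters before outputs are published or acted on.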

Bias is another significant concern. GenAI tools can inherit and amplify biases from their training data. These biases often reflect societal inequities around race, gender, class, and more, which can perpetuate harm against the very communities third sector organisations aim to serve. When I asked a generative AI tool to "draw a picture of what my life looks like based on what you know about me," the system depicted me as a white man, having learned only that I work in data and can code. 

As researcher Mikaela Pitcan of Data & Society notes, "The assumption of 'objective data' frees people from acknowledging structural inequality," which can lead to bias in decision-making or treatment.

Data privacy poses practical risks as well. When you share information with AI systems, sensitive details about your donors, beneficiaries, or organisation may be stored, used as training data, and made available to third parties. Different AI providers have varying policies about data retention and usage, with free tools often offering fewer privacy protections than paid versions. The complexity of genAI systems can also introduce vulnerability. These tools may be susceptible to hacking, data manipulation, or other cyber threats, posing risks to the integrity and confidentiality of organisational data.

Reputational risk is also a concern. The misuse or failure of genAI tools can cause significant reputational damage, eroding public trust and credibility: think of a chatbot giving insensitive responses to marginalised individuals, or an off-brand, factually incorrect AI-generated report. Again, this can be mitigated through processes like quality review and clear guidelines about when AI outputs are appropriate.

Staff wellbeing and development within your own organisation deserve consideration as well. The increased availability of AI may create anxiety about job security and skills devaluation. Constantly reviewing AI outputs can be cognitively draining and feel less meaningful than original creative work. Team dynamics could also be affected if AI replaces human interaction and collaboration.

Level two: Issues with the systems behind the tech

System-level risks are inseparable from the technology and exist regardless of how you use it. They are the cost of participation. Using AI tools means accepting that you're contributing to these harms, even indirectly. This is why the decision about whether to use AI at all is fundamentally a values question, not just a risk management exercise.

Copyright infringement sits at the foundation of many current AI systems. Creating the models that underpin tools like ChatGPT and Claude involves scraping vast quantities of books, articles, and other copyrighted materials, without compensation or permission from their creators. Several major lawsuits are currently challenging this practice. For organisations committed to fair compensation and respecting creative labour, using AI tools means grappling with the uncomfortable reality that these systems may be built on copyright infringement at a massive scale. Read more: 'AI startup Anthropic agrees to pay $1.5bn to settle book piracy lawsuit' (The Guardian) and '8 Daily Newspapers Sue OpenAI and Microsoft Over A.I.' (The New York Times).

Environmental costs are perhaps the biggest concern raised by third sector organisations. The generative AI supply chain carries a large environmental cost, including growing electricity demand and water consumption for data centres. The true cost can be difficult to gauge, as most AI companies are not transparent about their carbon footprints or energy demands. At the same time, AI is being used to tackle environmental challenges, which creates genuine tension for organisations working on climate justice or environmental protection. Resources like Hugging Face's AI Environmental Primer, 'AI has an environmental problem. Here’s what the world can do about that', and MIT News' 'Explained: Generative AI’s environmental impact' offer helpful background.

Labour exploitation is another unavoidable risk with current mainstream genAI tools. AI isn't as automated as it appears. Behind every ‘intelligent’ system are human workers, often in global majority countries, labelling data and reviewing AI outputs. These jobs frequently involve exposure to disturbing content, for example moderating hate speech, and are typically low-paid, with insufficient psychological support. There's also very little transparency about how many people are employed this way, or what protections they have, making it difficult for organisations to make informed ethical choices about the supply chains they're participating in.

Ultimately, you can't eliminate system-level risks through how you use AI. This is why using AI must be a values-led decision, and not just a risk-mitigation exercise. There are ways to reduce harm and push for change. This might include advocating for greater transparency from tech companies about environmental costs and labour practices, choosing tools from providers with better track records where possible, exploring open-source alternatives, or opting for smaller language models that require less computational power. These actions won't resolve the fundamental issues, but they can reduce your contribution to the harms and signal demand for more ethical AI development.
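For organisations that want to try the smaller, open models route mentioned above, the sketch below shows one way to run a small open-source model locally with the Hugging Face transformers library. It is a minimal example under stated assumptions: transformers and torch are installed, and the model named (google/flan-t5-small) is an illustrative choice rather than an endorsement.

```python
# Minimal sketch of running a small open-source model locally instead of a
# large hosted one. Assumes `pip install transformers torch`; the model below
# (google/flan-t5-small) is an illustrative choice, not a recommendation.
from transformers import pipeline

# flan-t5-small is a few hundred megabytes and runs on an ordinary laptop CPU.
summariser = pipeline("text2text-generation", model="google/flan-t5-small")

notes = (
    "Summarise for a trustee report: our food bank supported more households "
    "this quarter than last year, driven mainly by rising rents."  # example text
)

result = summariser(notes, max_new_tokens=60)
print(result[0]["generated_text"])
```

Smaller local models are generally less capable than large hosted ones, so the same human-review practices still apply; the trade-off is lower energy use per query and data that stays on your own machine.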

The bottom line

Generative AI can be a valuable tool, helping you work more efficiently, reach more people, and focus resources where they matter most. But it comes with genuine environmental and social costs that can't be ignored.

Use-level risks are largely within your control. These include hallucinations, bias in specific outputs, data privacy breaches, and reputational damage from AI-generated content. To counter them, use tactics like keeping humans in the loop, choosing providers carefully, never entering personal or sensitive data, and running controlled experiments before broad adoption.

System-level risks must be carefully weighed. The copyright infringement that underlies training data, the environmental costs of AI infrastructure, and the labour exploitation in the AI supply chain aren't problems you can solve through organisational policy.

There's no single 'responsible' answer that works for every organisation. Some organisations may decide that the potential benefits justify engaging with AI tools, while minimising harm and advocating for change. Others may conclude that the system-level costs are incompatible with their values and choose not to use them. Both decisions can be equally valid and responsible, depending on your mission and the communities you serve. What matters most is that the choice is yours to make, informed by a clear understanding of what these tools actually are, how they work, and what their use really involves.

AI Transparency statement

The first draft of this article was written by a human. Claude Sonnet 4.5 was used to suggest edits to improve clarity for a non-technical audience; these were then reviewed and selectively incorporated.

More AI Resources

Thank you to the Insight Infrastructure programme for their support. Developed by the Joseph Rowntree Foundation, it aims to democratise access to high-quality quantitative and qualitative data and evidence through open collaboration and innovation, to help tackle injustice and inequality in the UK.
