Can ChatGPT ever be ethical?
Our panel discusses whether third sector organisations can ever reconcile the ethical pitfalls of using Generative AI in their work - and how we can move forward.
Resources mentioned in the webinar
This list will not be updated, but we may share more in future articles.
Q&A
We weren’t able to get to all of the audience questions in time, so the panel have kindly provided some written answers.
Big Tech AI that helps with accessibility might not be accessible to global majority contexts. Apart from the digital divide, what about language? Some automatic transcription apps are terrible with different accents, for example. Is it really helping with accessibility, or is it making the gap between the global majority and minority that much wider?
Robin - This is a valid concern as, to date, speech recognition is only really useful when the speaker is clear and has no significant accent in one of the core languages (e.g. English US, English UK, English CA, Central/North American Spanish, etc.). However, significant work is being done on models that perform better with non-standard accents and speech patterns (stammer, cerebral palsy, etc.) – and to that end a new standard has been developed to help: the Code of Practice for Designing Voice-Based Systems for Vocal Accessibility.
AbilityNet has a session on this at their TechShare Pro 2025 conference.
What are the benefits of spending time on an AI policy vs allowing your team to experiment with AI, especially as a small organisation? Should we have a blanket ban on use until we have a policy in place?
Alice - You can do both: develop a policy by playing and learning. I would recommend:
Clear guardrails (human in the loop, transparency, security, use CAST resources)
An organisational logon people can use rather than their own
An initial conversation about how people feel and what they need
Then allow people to play with it so they can learn how it works, and where it is and isn’t useful, before finalising any policy. The Trusts I work with that have the most mature approaches seem to do that!
Is bias in AI something to be concerned about, and if so, what are your recommendations on managing it?
Alice - I agree there is bias (MIT info here), as we discussed on the call, so how you manage it depends on how you plan to use AI. A human in the loop is vital, but also consider running an Equality Impact Assessment and a Data Protection Impact Assessment to evaluate how such bias may harm your users, and only use a tool once it passes.
Is it worth including transparency notices where AI has been used?
Alice - I would highly recommend highlighting where AI has been used to generate content (beyond spellcheck etc.), for transparency and public trust. It may be as simple as writing ‘Image generated by Copilot’, tagging it as AI on social media where that is possible, or a more detailed description. For example, when I share meeting notes generated by Copilot, I tend to say ‘Meeting notes generated by Copilot (AI) and edited by Alice’ or similar. And I tell people before meetings that I plan to use AI, and get their consent. There are some more complex licences around that show whether AI has or has not been used, but none is currently the front runner!
It feels like in future we may be obliged to get involved in an AI arms race with bad actors. What are the panel’s thoughts or experiences on using AI to track and respond to online misinformation, or abuse and hate targeting our sector?
Alice - A hard question to answer! AI can be used to track patterns of sentiment and information about your organisation, but may also be the source of the issue! I worry that, as charities, we won’t win an AI-versus-AI fight against bad actors, as they are not bound by the ethics and values we have, so will ‘play dirty’. Our own credibility and trustworthiness remain vital for public trust.
I wonder whether we need shared monitoring across civil society, similar to ‘threat intelligence sharing’ in cybersecurity: clear disclosure of when we use AI for comms, to keep trust; a human in the loop again; and charities advocating for stronger legislation requiring major platforms to detect and mitigate abuse and misinformation targeting charities and vulnerable groups. It’s worth having guidance for staff on dealing with online abuse and trolling anyway. We do have some, and in a lot of cases we block and ignore when we know it is in bad faith.
Is there a resource about the environmental impact of AI technologies?
Alice - This is also something I am asked a lot, and honestly, it’s highly opaque! I have a lot of links on this:
CodeCarbon, n.d. CodeCarbon: Open Source Tool to Track CO₂ Emissions. [online] Available at: https://codecarbon.io/ [Accessed 8 Jan. 2025]. (Python package)
Lacoste, A., Luccioni, A., Schmidt, V. and Dandres, T., 2019. Quantifying the Carbon Emissions of Machine Learning. [online] Available at: https://mlco2.github.io/impact/ [Accessed 8 Jan. 2025]. (Python package)
Microsoft, n.d. Emissions Impact Dashboard for Azure. [online] (Microsoft tool)
Deloitte, n.d. AI Carbon Footprint Calculator. [online] Available at: https://www.deloitte.com/uk/en/services/consulting/content/ai-carbon-footprint-calculator.html [Accessed 8 Jan. 2025]. (Works with any provider, but gives only a high/medium/low rating rather than a CO₂ figure)
Communications of the ACM, 2023. The Carbon Footprint of Artificial Intelligence. [online] Available at: https://cacm.acm.org/news/the-carbon-footprint-of-artificial-intelligence/ [Accessed 8 Jan. 2025]. (Article on tracking emissions)
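For teams already running their own Python workloads, a tracker like CodeCarbon (the first entry above) can be dropped into existing code to log an emissions estimate per run. The snippet below is a minimal sketch, assuming the codecarbon package is installed; the project name and the train_model() function are hypothetical placeholders for your own workload.

```python
# A minimal sketch using the CodeCarbon package listed above (pip install codecarbon).
# The project name and train_model() are hypothetical placeholders.
from codecarbon import EmissionsTracker


def train_model():
    # Stand-in for whatever model training or data processing you run.
    return sum(i * i for i in range(10_000_000))


tracker = EmissionsTracker(project_name="charity-ai-pilot")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated emissions in kg of CO2-equivalent

print(f"Estimated emissions for this run: {emissions_kg:.6f} kg CO2eq")
```

The figures it produces are estimates based on measured energy use and regional grid carbon intensity, so treat them as indicative rather than exact.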
General data use (it isn’t just AI!)
Guidance
ACL Digital - Eco-Friendly Digital Design - Insights into how sustainable user experience (UX) design can minimise energy consumption.
AI's Carbon Footprint is Bigger Than You Think - An exploration of the often-overlooked environmental impact of generative AI models.
AI's Growing Carbon Footprint - An article discussing the significant energy consumption and carbon emissions associated with training large AI models.
AWS Carbon Footprint Tool - Amazon Web Services’ tool for estimating the carbon impact of cloud services compared to traditional hosting.
B Corp Climate Collective: Digital Carbon Footprint - Guidance on measuring and reducing digital emissions within businesses, including digital carbon footprint assessments.
The Carbon Footprint of Machine Learning Training - Discusses how machine learning training carbon emissions are likely to plateau and shrink in the future.
Cloud Carbon Footprint - An open source tool to measure and analyse cloud carbon emissions.
Designing for Sustainability – W3C - A guide from the World Wide Web Consortium (W3C) on sustainable web design principles, balancing performance, accessibility, and energy efficiency.
Digital Beacon - A tool that helps analyse the sustainability of digital products and services.
Digital Inclusion & Environmental Sustainability – Good Things Foundation - Research and tools on how digital inclusion efforts can be aligned with environmental sustainability, particularly for community-focused organisations.
Digital Sustainability - Climate Action Tech - A community-led initiative providing guides on designing user-friendly, sustainable digital products.
Ecograder - Assesses websites' environmental impact and offers suggestions for improvement.
Estimating the Carbon Footprint of BLOOM - Analysis of the carbon footprint for training BLOOM, a large language model with 176 billion parameters.
Green Web Foundation – Hosting Carbon Ratings - A searchable directory of web hosting providers with sustainability ratings.
Microsoft Sustainability Calculator - A tool for estimating and reducing carbon emissions from Microsoft Azure cloud services.
Shrinking Deep Learning's Carbon Footprint - Research on innovations aimed at reducing the environmental costs of deep learning through improved software and hardware.
Training a Single AI Model and its Carbon Impact - Explores how training a single AI model can emit as much carbon as five cars during their lifetimes.
Your panel
Robin Christopherson
A leading expert on accessibility and digital inclusion, Robin Christopherson of UK tech charity AbilityNet is a regular speaker raising awareness of the power and potential of technology to transform people's lives. Being blind himself, he is a passionate supporter of products that are inclusive and easier for everyone to use. His work was recognised with an MBE in the 2017 New Year Honours list, and more recently with an honorary doctorate from the University of Suffolk. He also featured in the UN's ‘World's top 100 most influential people in digital government’ list in 2019, as voted for by over 500 organisations including governments and global NGOs.
Alice Kershaw
Alice Kershaw likes to work on partnership projects that mix culture, tech and values themes. She has been Head of Digital Transformation at the Royal Society of Wildlife Trusts since October 2021, and leads the federation’s ambitious digital programme supporting its collective 2030 strategy to Bring Nature Back. She supports the federation of 47 individual grassroots charities to make the most of digital, through strategic thinking and the development and delivery of new services, including a soon-to-launch data service. A specialist in change management, service design, and digital transformation within the charity and third sector, she has extensive experience spanning business analysis, process reengineering, and Agile. She has also advised the Dovetail Network, mentors students from underserved backgrounds who want to get into digital, and serves as a trustee of the National Biodiversity Network Trust. In her spare time she enjoys running in the mud whilst eating snacks in lumpy landscapes.
David Nolan
David is an investigative researcher within the Algorithmic Accountability Lab at Amnesty Tech. He works at the intersection of technology, AI and human rights across research, policy and advocacy. His research focuses on investigating the deployment of AI and algorithmic systems by public sector agencies and Big Tech.
Moderated by our Head of Data Science, Dulcie Vousden
Dulcie leads the development and delivery of DataKind UK's pro bono data programmes, providing technical oversight to ensure solutions are impactful and that data is used responsibly. Dulcie also leads conversations around responsible data science/AI use in the social sector. Before joining DataKind UK, Dulcie completed a PhD in medical biophysics at the University of Toronto and was a neuroscience researcher at UCL. She loves helping organisations use data and evidence to understand what works. In her free time she enjoys cooking and spending time with her two small children.
Thank you to the Insight Infrastructure programme for their support. Developed by the Joseph Rowntree Foundation, it aims to democratise access to high-quality quantitative and qualitative data and evidence through open collaboration and innovation, to help tackle injustice and inequality in the UK.