How to Balance Innovation and Responsibility

By Nicole Holgate, Communications Manager at DataKind UK and Carol Hounsell, Public Health Intelligence Specialist

Exploring AI for Social Good at Big Data & AI World, part of Tech Show London 2026.


Since 2013, DataKind UK has been applying cutting-edge data science approaches to the third sector’s challenges, helping social good organisations to improve their decision making, problem solving, and working practices, while centring their core value of responsible data use. DataKind UK’s approach is to meet organisations where they are, expanding their capacity with the expertise of their volunteers. Volunteers donate their professional expertise to understand data collection, strengthen data privacy, and develop infrastructure.

As AI rapidly reshapes the tech landscape, how can the third sector ensure that responsible use keeps pace? Can organisations that handle data about vulnerable people and crucial societal problems ensure they have ethical foundations in place?

The panel from left to right: Avelin, Tosin Oyeladun, Jesubukade Ajakaye, and Kingsley Okonkwo

Our Big Data & AI World panel brought together data professionals from a wide range of backgrounds. It was chaired by Avelin, Data Analyst at Cardiff University. She spoke to Tosin Oyeladun, Data & Performance Analyst at Southend-on-Sea Council; consultant Kingsley Okonkwo; and data science intern Jesubukade Ajakaye.

The third sector is dominated by smaller organisations that can achieve a great deal with very few resources. As much of their work involves sensitive data and vulnerable people, safeguarding and ethical data use are central. But a large proportion of the sector has adopted some form of AI in the past couple of years, often without dedicated resources.

The panel began by agreeing that accountability, transparency, and an ethical infrastructure have to sit underneath any adoption of new tech, especially when data is involved. Ethical considerations must always sit within governance structures, and every use case requires its own assessment.

Good governance can also help to overcome systemic errors and tackle embedded biases. Third sector organisations need to be clear on who has access to data, whether it is clean enough to be used for decision-making, and who is liable for any outcomes.

Organisations can also examine how their data is collected, and use that to reduce bias. It is essential to work closely with user groups to avoid exclusion and data inaccuracy. One organisation adapted its pathways for gathering data when they found that a group of service users had very low digital literacy, and relied on their children to fill in online forms.

Tackling data literacy at an organisational level is also vital. Third sector teams are often aware that data has huge potential, such as using predictive approaches to support resourcing and planning. AI can also offer valuable support where capacity is low, but it is not a single source of truth. A good approach is to support and develop data champions within each part of the organisation, who can advocate for appropriate and relevant data use.

To adopt AI successfully and safely, the panel recommended that organisations start with administrative and repetitive functions, and always keep a human in the loop to ensure reliable outputs. Privacy and security are essential, as are clear guidelines for any use of AI. Guardrails for embracing AI could include updated security measures, additional consent information, and no extraction of data into other systems.

The panel encouraged the sector as a whole to start conversations, share resources, and not shy away from speaking to people with differing views about data and AI use.

When it comes to testing and innovation, they agreed that it’s crucial to co-create everything with your audience and involve your service users throughout any design process. The only way to meet people’s needs is to embed them in the build throughout the project. When stakeholders are included in the design process, products are tested against real community needs. Feedback is vital for understanding whether an approach or tool is relevant and effective, and whether it actually helps the people it is intended for.

Finally, acknowledging the thorny environmental side of AI use, the panel highlighted our responsibility to use AI only when it is truly necessary. Regulations, and perhaps even usage quotas, can help balance AI’s benefits against its environmental and social impact.

Huge thanks to all of the volunteers involved in making this a fantastic event!


Get some ‘data for good’ news in your inbox!

Sign up to our monthly-ish newsletters to hear about DataKind UK’s programmes and services, as well as other inspiring projects, job roles, events, and all things data.
If you are more interested in volunteering, take a look at our volunteering page.