
Navigating AI in 2025: Data protection considerations for charities deploying AI technologies

Bates Wells’ Head of Data and Privacy, Eleanor Duhs, shares six top tips for charities to consider when deploying or using AI in the year ahead, including key questions to ask developers about AI technologies.


At a recent panel event organised by Bates Wells, we discussed some of the key trends and themes we’re seeing when working with clients, and in the market more generally, around AI technologies.

While the AI landscape continues to develop rapidly, we’ve summarised some of the key considerations for charities planning to deploy AI over the coming year.

1. In the UK, a patchwork of legislation applies to AI, from copyright to human rights and data protection. But we expect to see new legislative frameworks in 2025, as signalled in the King’s Speech, which committed to “establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”

In the EU, the AI Act has now come into force. Most of its obligations will become binding in August 2026. During this lead-in period, it’s important to start preparing so that you will be compliant if its provisions apply to your activities.


2. Personal data is at the core of AI, and careful consideration should be given before deploying new systems. As Elizabeth Denham CBE, the former Information Commissioner, put it, “the energy powering… new technologies is our data”. As an organisation, you should be considering:

- Are you a controller or processor? This will determine your obligations under data protection law.

- Transparency – do you understand how the technology works? Stay ahead of the game and be prepared to explain it.

- Fairness and accountability – are you deploying systems in a way that is fair and lawful?

- Automated decision-making – will algorithms be able to make decisions about human beings? If so, you will need to ensure that appropriate safeguards are in place.

- Are the outputs accurate? Is the system secure?


3. Key questions to ask the developers of your AI technologies are:

- Will the developer of the AI system support you with explaining the system, so that you can meet your accountability obligations?

- What data protection considerations are inbuilt in the system already?


4. Manage expectations. If your board is looking to deploy an AI system, it will take time and resources to conduct the relevant analysis. The model will need ongoing monitoring to ensure that it is working accurately and in compliance with the law.

5. Conducting a Data Protection Impact Assessment is critical. This document needs to evidence your careful analysis of the system and explain why it is fair and proportionate to use it. Further, if anything goes wrong, the regulator may want to see this assessment.

6. The ICO’s AI and data protection risk toolkit is an extremely useful resource. It is a practical guide to what you need to do to ensure accountability, fairness, transparency and proper governance for your AI system.

If you have any questions around how you can prepare to use AI, or are interested in reviewing your data protection policies and procedures around AI, please do get in touch.
