
How will your charity harness the power and potential of AI?

Generative AI presents an enormous opportunity to do more good with less. But what are the ethical considerations for charities? Emma Abbott and Clare Mills take a closer look.



It feels as if the words 'artificial intelligence' (AI) are cropping up in every discussion, news article and professional forum. It’s everywhere; we can’t avoid it. But what is it? Or what is it not?

As Eleonor Duhs comments here, there's no consensus on a single definition. At its simplest, though, AI is a system or set of tools designed to simulate and exhibit human characteristics, and the type of AI we are most commonly referring to and using now is generative AI.

It’s so called because it generates something new from existing material, combining and transforming the data it draws together. Its output isn’t necessarily the answer; it’s an answer.

This is important because, although thinking about AI in this way can help us set aside sci-fi images of machine consciousness and enslaved humans, we still need to consider the ethics of its use. Urgently.


Thinking about values and ethics

At CFG, we’ve been looking at AI from different perspectives, as a spectrum of issues, and we’re starting to develop a framework for thinking and policy. This is crucial because AI is not only advancing fast, it’s also being experimented with.

The chances are that your staff and volunteers are using it in some way – perhaps unknowingly! – and so too are your beneficiaries, clients and customers. And we also know that regulation lags woefully far behind the pace of technological innovation and adoption.

It’s vital that we create a framework that has our values as an organisation – and a sector – at the heart of it. To share something said at a recent CharityComms seminar, technology is often harnessed by those with poor understanding of the issues, or unethical intentions.

Therefore, using AI could lock in a bad set of values for the long term, such as bias against people from different backgrounds or cultures, as discussed below. As one speaker put it: “The charity sector can’t be caught snoozing”.

As AI starts to have an impact on job roles and profiles, it’s even more crucial that we understand what it can and can’t do for us – and what the risks are.

For example, AI could be used to support the recruitment process, by sourcing, sifting and screening talent.

It can supplement bid writing, taking the pain out of drafting lengthy documentation.

It can generate campaign materials and messaging, and test how they land with different audiences.

The possibilities are endless – and not yet fully understood. Today, we might ask job applicants to not use AI, but how long will it be before we're recruiting people who have a strong grip on its use?

Human resources: will AI replace us?

This is the big question, and we’ve already seen headlines about organisations ushering in a new era of AI. BT is cutting around 55,000 jobs and expects AI to replace around 10,000 of them.

IKEA is taking a different approach (judging by the headlines, which count for a lot when considering brand and reputation!).

The company is retraining call centre workers as virtual designers, rather than switching to AI and making redundancies. It’s a business gamble perhaps, but this speaks volumes about the company’s intentions and values.

One of the most powerful takeaways for us in recent weeks is that AI can (should?) be used to replace tasks, not roles. Charity leaders may soon have to make some difficult choices on how they deploy AI and how that impacts jobs. So, align your values and choices, and communicate them well.


A question of bias

Being truly inclusive is important for us at CFG. We’re not only mindful of potential bias when we recruit, but also when we communicate our stories and those of our communities.

We’re not unique in this and many charitable organisations proactively challenge and reduce bias.

This is why the sector must approach AI with caution: generative AI can only give us what it has already been given.

Or put another way: bias in, bias out. Research shows time and again that there’s still a long way to go in creating a truly unbiased AI system.

Be mindful that the AI tools you use could be presenting a narrow world view that bakes in bias or simply distorts reality. Whether we’re recruiting staff or creating content with AI, we must check and test for bias.

And yes, there is a role here for our sector to continue pressing for better and doing better.

 

Sources and transparency

At CFG we’re not currently using generative AI to create content and shortcut human interaction, but we’re keeping an open mind.

We’ve dabbled with the free tools, such as ChatGPT, and have been relatively impressed with the content they generate. But! They undoubtedly need an expert eye, and we can’t always rely on them to be current.

We like the look of Claude, not only because it can process so much more contextual information than other tools, but because it is underpinned by its constitution, which was based on the United Nations Universal Declaration of Human Rights. This article from Forbes is worth a read if you want to delve much deeper.

We must continue to be transparent and rigorous in the creation and sharing of information and data. That means knowing and citing our sources, including checking any found through the use of AI.

If we can’t do that, we open ourselves up to a range of problems and the burden of reputational damage may fall on individuals, not only the Board or organisation.

This is where professional networks – such as CFG’s membership community – will remain a vital source of knowledge and expertise, and be even more valued in any future battles against distortion and disinformation.

 

A brighter future

Ultimately, AI will revolutionise how we understand, share and manage data and information. It will change how we create content, and it will feed into our everyday communications and workflows.

Harnessing it in the right way can help us do so much more, for so many more. This is truly exciting.

But it will only be for the good if we recognise inherent risks and limitations, and build a framework to manage them. We not only owe this to our staff and volunteers, but also to our beneficiaries and those communities we serve.

And by being more intentional in our use of AI, we can pass mundane and repetitive tasks over to the machine and free up time to be truly creative and transformative. If we do it right, AI can help us to be the change we want to see in the world!

This article first appeared in RBC Brewin Dolphin's Charity Perspective magazine, Summer 2023

Eleonor Duhs from Bates Wells shares an overview on AI and the wider policy framework

