As AI technologies evolve and employees experiment, organisations should look to set out their AI strategy. Sarah Belsham from RSM UK explains how.
As the dust settles from the launch of ChatGPT, how can charities respond to the hype in a structured and controlled way?
Many have been experimenting with generative artificial intelligence (AI), particularly in search of productivity gains and with the goal of handing over mundane and repetitive tasks to a machine.
Tasks such as drafting emails and reports, managing calendars, and disseminating online information are all now possible, and it goes as far as writing material for marketing campaigns and social media content.
It can save valuable time, but its use still requires human intervention to validate the results and to ensure messaging and tone of voice align with the charity's core values and overall strategy.
Governance and risk
Turning experimentation into tangible, long-standing outcomes requires a joined-up and purpose-led approach, defined through an AI strategy that considers not just generative AI but AI in the round.
That doesn't mean spending months discussing and deliberating over the who, what, how and when of AI, but rather taking a holistic and honest view of the opportunities and expected outcomes of AI, and of your readiness to achieve them.
A charity's AI strategy should set out your vision for AI, underpinned by concrete use cases and a recognition of the need for reliable data: data that is available, well-structured, of good quality and free from bias.
If your use cases are based on publicly available data rather than your own data, your strategy should recognise the limitations and possible biases of using data that is not within your own control.
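By way of illustration, the sketch below shows the kind of basic screening this implies, assuming a tabular dataset held in a CSV file; the file name and the `region` column are hypothetical, and real quality and bias checks would go considerably further.

```python
import pandas as pd

# Hypothetical example: screen a tabular dataset for basic quality
# and representativeness issues before building an AI use case on it.
df = pd.read_csv("beneficiary_records.csv")  # hypothetical file name

# Availability and structure: how complete is each column?
missing = df.isna().mean().sort_values(ascending=False)
print("Proportion of missing values per column:\n", missing)

# Duplicate rows inflate some groups and skew anything trained on the data.
print(f"Duplicate rows: {df.duplicated().sum()}")

# A crude representativeness check: is any group heavily over- or
# under-represented? ('region' is a hypothetical column name.)
shares = df["region"].value_counts(normalize=True)
print("Share of records per region:\n", shares)
```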
Regardless of whether you plan to leverage publicly available data or your own, two fundamental aspects need to be considered as part of a charity's AI strategy: people and governance.
Governance is an umbrella term encompassing many things, from adherence to regulatory requirements to internal policies on the appropriate use of data and AI tools. Whatever form it takes, it's important to make sure you have guardrails in place that allow continued experimentation without undue risk.
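One very simple guardrail is sketched below, on the assumption that staff paste text into an external generative AI tool: strip obvious personal identifiers before anything leaves the organisation. The patterns shown are illustrative only, not an exhaustive redaction policy.

```python
import re

# Hypothetical guardrail: redact obvious personal identifiers before
# text is sent to an external generative AI tool. These patterns are
# illustrative, not a complete data protection policy.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact Jo on 07700 900123 or jo@example.org about the grant."))
# -> Contact Jo on [UK_PHONE REDACTED] or [EMAIL REDACTED] about the grant.
```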
Investing in people
Encouraging experimentation stems from taking a people-centric approach to AI and digital transformation more generally.
Consider the capabilities needed to deliver your AI vision and identify opportunities for educating and upskilling your workforce.
At the very least, you must ensure your people are equipped to validate and challenge the outputs of your AI, and that they feel comfortable and empowered to embrace new ways of working.
Value from AI investment will only be realised if solutions are used and embraced by everyone.
Whilst the hype isn't going to go away, it must now be matched by concrete plans, subtle changes to ways of working and, most importantly, robust governance and ethics.
Ethics that challenge you to consider whether AI is an appropriate solution, whether you can explain the inputs and outputs of your AI, and whether you are using AI in a fair and harm-free manner.
I will be speaking at CFG’s Annual Conference on 27 June at a session entitled 'AI: a powerful tool for charities'. It would be great to see you there!
In the meantime, and if you would like to discuss your charity's AI strategy or any guidance in this area, please email me.