The AI-lephant in the room
It's time to talk about the elephant in the room. As fundraisers, we work for the benefit of some of the most disadvantaged people in society. We're in this sector because we want to make a positive impact, so it stands to reason that the approaches we take shouldn't end up oppressing a different group of people (and we're not referring to rebalancing existing power structures here).
Lately, a string of articles has started to expose the cracks in large-scale, black-box AI algorithms, and the scapegoat mentality that emerges when individuals or organisations don't want to be held to account for their bad behaviour.
A recent instance of AI as scapegoat is Channel Nine's apology to MP Georgie Purcell, which claimed that the digital altering of her image was the work of some rogue Photoshop AI that just happened to sexualise a fairly ordinary photo. This demonstrates one of the many challenges of the proliferation of AI: humans will blame the algorithm in the hope of avoiding formal responsibility.
More recently, Air Canada tried to argue its way out of a bereavement refund its chatbot had promised, claiming the chatbot was a separate legal entity responsible for its own actions.
Air Canada was found liable for its chatbot and required to provide an appropriate refund to the customer, but not before acting in bad faith and trying to absolve itself of responsibility.
If we apply this to our fundraising, how long until Supporter Care fields an uncomfortable call from a supporter who wants to know why they were asked for $5,000 when the most significant gift they've given is $200? Or how long until a Supporter Care chatbot fabricates information about our causes, all in an effort to save time and money? Will the donor be satisfied with "The algorithm said to..." or "The chatbot is responsible for its own actions"? I suspect they'll be getting in touch with A Current Affair to tell their story. This type of unexplainable, black-box AI has the potential to erode trust among our supporters and the broader public because of our inability to answer 'why'.
Another emerging AI capability that may prove helpful to charities is the large language model (LLM), which generates content at a rapid pace. What was once a budget item for copywriting, or a few hours of our time, can now be a well-constructed prompt to OpenAI's ChatGPT. While this looks like a great time and money saver, it raises some ethical questions about the content used to build the models and OpenAI's right to use it. OpenAI recently confirmed that models like this wouldn't be possible without large amounts of prose, including copyrighted material.
It certainly appears that OpenAI is comfortable and willing to exploit authors, journalists, writers, bloggers, media companies and publishers in pursuit of developing its LLMs.
Many organisations already have modern slavery policies and an expectation that no form of slavery exists in their supply chain. This exploitation by OpenAI should raise questions, and eyebrows, in charity boardrooms across Australia about whether any of their suppliers are incorporating these tools into that supply chain.
And so, adding to the existing concerns about exploitation, we should also question the quality, equity and suitability of the content used to train the LLMs being built and used. Just last week, Reddit announced a $60M deal with Google, allowing Google to use Reddit conversations as training data.
- https://www.theregister.com/2024/02/20/reddit_content_ai_deal/
- https://www.theregister.com/2024/02/22/reddit_google_license_ipo_altman/
It's been a while since I've spent considerable time on Reddit, but it certainly isn't known as the friendliest or most welcoming place on the internet. Imagine an LLM built on conversations where trolls hijack the discussion, and how that training data might shape a chatbot's interaction with a supporter.
The quality of training data leads to the last thing for us to think about: the output of AI and machine learning models is only ever as good as the inputs. When we apply this to how AI & ML are often used in fundraising, we run the risk that most models are built on targeting originally created with human intellect. There may be refinements that improve efficiency, but more often than not, new training data isn't introduced. The result is a model that reproduces human targeting more efficiently and, over time, shrinks the donor base: the model strips out the less responsive donors, who only become less responsive the longer they go without being contacted.
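To make that feedback loop concrete, here's a minimal sketch of the dynamic. Everything in it is an assumption for illustration: 10,000 made-up donors, a "model" that simply mails the top 30% by engagement, and an engagement score that decays for anyone left out of a campaign. Nothing here reflects a real programme or supplier.

```python
import random

# Illustrative only: a toy simulation of the feedback loop described above.
# Assumed numbers (not from any real programme): 10,000 donors, a "model"
# that mails the top 30% by engagement, and engagement that decays for
# anyone left out of a campaign.
random.seed(42)

donors = [{"engagement": random.random()} for _ in range(10_000)]

for campaign in range(1, 9):
    # Stand-in for any selection built purely on past responsiveness:
    # rank by engagement and keep the top 30%.
    donors.sort(key=lambda d: d["engagement"], reverse=True)
    cutoff = int(len(donors) * 0.3)
    selected, excluded = donors[:cutoff], donors[cutoff:]

    for d in selected:
        d["engagement"] = min(1.0, d["engagement"] * 1.05)  # contacted donors warm slightly
    for d in excluded:
        d["engagement"] *= 0.8  # uncontacted donors cool off

    mailable = sum(1 for d in donors if d["engagement"] > 0.2)
    print(f"Campaign {campaign}: {mailable} donors still look mailable")
```

Each pass narrows the pool of donors who still look worth contacting, which is exactly the shrinking base described above.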
Some of our clients working with AI suppliers in fundraising are questioning whether the perceived benefits outweigh the risks: can they sustain the bump in performance over the longer term, or is it a short-term tactic to be used intermittently to boost a good-quality, well-executed fundraising strategy?
Practical Tips
- Ask your AI & ML suppliers (internal or external) where the training data for their models comes from. Confirm this for all the models they use and produce, not just the ones you're investing in.
- Confirm that the supplier doesn't use models trained on unethically sourced data and that they have permission to use the data for that purpose.
- Ask the team to explain why a particular outcome occurs. If they can't explain it in a way that would satisfy a supporter, there may be a better approach to take.
- When targeting donors with AI & ML, also include supporters who align with your strategy. For example, if you've recently invested in donor acquisition, add back the new donors the model excluded; it's the only way they'll have the opportunity to make a second gift (see the sketch below).
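To illustrate that last tip, here's a rough sketch of re-including recently acquired donors in a model-driven selection. The field names, the 12-month window and the build_audience helper are all hypothetical; adapt the idea to your own CRM and your supplier's outputs.

```python
from datetime import date, timedelta

# Illustrative only: re-including recently acquired donors that a
# propensity model left out. The field names and the 12-month window
# are assumptions, not a real schema or rule.
RECENT = date.today() - timedelta(days=365)

def build_audience(all_donors, model_selected_ids):
    """Union of the model's picks and anyone acquired in the last year."""
    audience = {d["id"]: d for d in all_donors if d["id"] in model_selected_ids}
    for d in all_donors:
        if d["first_gift_date"] >= RECENT and d["id"] not in audience:
            audience[d["id"]] = d  # give new donors the chance of a second gift
    return list(audience.values())

# Example with made-up records: the model only picked donor 1,
# but donor 2 gave their first gift two months ago, so they're added back.
donors = [
    {"id": 1, "first_gift_date": date(2019, 5, 1)},
    {"id": 2, "first_gift_date": date.today() - timedelta(days=60)},
]
print([d["id"] for d in build_audience(donors, model_selected_ids={1})])  # -> [1, 2]
```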
There is great potential in well-considered applications of AI & ML. However, we believe the current approaches need more careful scrutiny to ensure we don't build in bias that works against what we stand for as a sector.