Embedding an AI assistant inside software people actually use

HCOMS · April 2026 · 9 min read

Most of the AI advice aimed at charities and dioceses is either breathless ("AI will transform your ministry!") or nervously dismissive ("we shouldn’t touch it"). Neither helps the diocesan secretary deciding whether to spend £20 a month on ChatGPT for the office. So here’s the version we actually give clients.

There are roughly five places generative AI saves real time in a third-sector office, and four where it’s either useless or risky. The trick is knowing which is which.

Where AI helps

1. Drafting and rewriting correspondence

The single biggest win. Drafting a tactful reply to a difficult complaint, rewriting a parish notice for clarity, summarising a 14-page report into a one-pager for trustees. ChatGPT, Claude, or Microsoft Copilot will do any of these well in seconds, and a human still gets the final say.

Rule: AI drafts, humans approve. Never let AI-generated text leave the building without a person reading it word for word.

2. Meeting notes and action lists

Microsoft Teams Premium, Otter.ai and Fireflies all transcribe meetings, summarise them, and pull out action items. For a diocesan office that runs three committees a week, that's a couple of hours of admin disappearing every week.

Watch-out: safeguarding meetings should never be transcribed by a third-party service. Different category of data, different risk profile. Keep those in-room and on paper.

3. Translation

For multi-language parishes — Polish in some Norfolk benefices, Tamil in some London ones — the modern translation models are far better than Google Translate was five years ago. A bulletin can be in three languages without anyone spending three hours on it.

4. Searching across your own documents

This is the sleeper application. Tools like NotebookLM and Microsoft Copilot can answer questions like "what does our last safeguarding policy review say about under-16 confirmations?" by reading every PDF and Word document you point them at. For an organisation with twenty-five years of accumulated policy documents, that's genuinely transformative.

Caveat: only point it at documents that are already meant to be read by anyone in the office. Don’t feed personal HR records or open safeguarding cases to a search tool.

5. First-pass research

"Summarise the latest Charity Commission guidance on social media." "What do other dioceses do about clergy housing tax?" AI gives you a quick read of the landscape; you verify the specifics with primary sources before acting.

Where AI doesn’t help (yet)

1. Anything with personal data you wouldn’t put on a postcard

Safeguarding referrals. Pastoral conversations. DBS forms. Medical information. The free tier of ChatGPT trains on your prompts unless you switch it off; even when paid, the data leaves your control. For sensitive personal data, either use a tool with explicit no-training contracts (Microsoft 365 Copilot, Anthropic on a paid API plan) or don’t use AI at all.

2. Numerical accuracy

Models still hallucinate numbers regularly. Annual accounts. Service attendance figures. Risk register percentages. Budget variances. If accuracy matters, run the numbers in a spreadsheet and use AI only to write the prose around them.

3. Anything that needs a citation

Models invent citations. Confidently. The footnote will look real, the URL will look real, the journal will look real, and none of it will exist. For statutory reporting or anything that’ll be cited back, AI is a starting point at best.

4. Replacing a human relationship

Nobody wants a chatbot when they’ve just lost a parent. Nobody wants automated condolences. Some categories of communication require a human in the loop, full stop, and trying to AI-optimise them backfires badly.

The boring practical recommendations

1. Pay for a tier with an explicit no-training contract; don't put organisational data through a free account.

2. Make "AI drafts, humans approve" an office rule, not a suggestion.

3. Keep safeguarding material and sensitive personal data out of AI tools entirely.

4. Write a one-page policy saying which uses are permitted, before someone improvises one.

Where this is going

By 2027 most diocesan office software will have an AI assistant baked in — ours included. The interesting question is no longer whether to use AI, but which uses to permit. A clear policy now means you don’t end up scrambling to retro-fit one when the inevitable safeguarding question lands.

If you’d like a hand drafting that policy, or thinking about which AI tools are genuinely safe for your sector, drop us a line. We’ve been having this conversation a lot recently.