Five questions to ask any AI vendor before you sign
Most AI procurement happens in the wrong order. Someone in marketing tries a tool, likes it, gets a corporate licence, and the data-governance question gets asked nine months later when the auditor turns up. By which point a lot of personal data has crossed a lot of borders.
This is the short version of the questionnaire we send to clients before they sign with any AI vendor. Five questions. Each one has an answer that should be a deal-breaker.
1. Where does the data go?
Specifically: in which countries are the model’s servers, and where is the data stored at rest? "The cloud" is not an answer. "AWS US-East-1" is.
Deal-breaker: any vendor who can’t name the regions, or who reserves the right to move data between regions without notice. Under UK GDPR you need to know where personal data is processed, and any transfer outside the UK needs a transfer mechanism you can actually point to.
Acceptable: a contract that pins your tenant to UK or EU regions, with a written commitment that the AI provider won’t move it.
2. Does my data train future models?
Three answers a vendor might give:
- "Yes, by default, but you can opt out." — The free tier of ChatGPT, Gemini, Grok. Treat as no-go for any business data.
- "No, we don’t train on customer data." — Microsoft 365 Copilot, Anthropic Claude on the API, OpenAI’s Team and Enterprise tiers. The default to look for.
- "We delete prompts after 30 days unless you flag them as feedback." — Reasonable belt-and-braces. Make sure it’s in writing.
Deal-breaker: a vendor who can’t produce a written contractual no-training clause.
3. Who in your company can see my data?
Inside the AI vendor itself. Are prompts and responses visible to support staff? To engineers debugging issues? To trust-and-safety teams flagging potential abuse?
The honest answer is "yes, sometimes, with controls". The right answer to look for is a published policy on internal access, with logging, role-based gating, and a defined retention period.
Deal-breaker: any vendor who claims "no human ever sees customer data" — that’s either dishonest or so over-engineered as to be operationally fragile.
4. What’s the audit trail?
For each query: who asked, what they asked, what came back, what action was taken as a result. Can you export it? Can you keep it for as long as your retention policy requires?
The Information Commissioner’s Office is increasingly clear that automated decision-making needs to be auditable in the same way human decision-making is. If your AI vendor can’t produce a per-tenant audit log, you can’t defend the system to a regulator.
Deal-breaker: "audit logs are available on request from our team within 5 business days." That’s not an audit log; that’s a hostage situation.
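As a concrete target when you ask the question, the per-query record described above can be sketched as follows. This is an illustrative shape, not any vendor’s actual schema: the field names, the JSONL export format, and the example values are all assumptions.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative shape of one audit record: who asked, what they asked,
# what came back, and what action followed (field names are assumptions).
@dataclass
class AuditRecord:
    timestamp: str     # ISO 8601, e.g. "2026-01-15T09:30:00Z"
    user: str          # who asked
    prompt: str        # what they asked
    response: str      # what came back
    action_taken: str  # what was done as a result

records = [
    AuditRecord("2026-01-15T09:30:00Z", "j.smith",
                "Summarise complaint #4411", "Summary text...",
                "summary attached to case file"),
]

# A per-tenant export you control: one JSON object per line (JSONL),
# retained for as long as your own policy requires.
export = "\n".join(json.dumps(asdict(r)) for r in records)
print(export)
```

If a vendor can hand you something of this shape, on demand and per tenant, the audit question is answered; anything less is the five-business-days problem above.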
5. What happens when you’re acquired or you go bust?
An overlooked one. AI startups burn cash; many won’t exist in three years. If your AI vendor is bought by a US-based competitor, what happens to your tenant data? If they go bankrupt, who’s the data controller of last resort?
Look for a contractual data-portability clause: on termination (whether you choose it or the vendor disappears), you get your data back in machine-readable form within a defined window, and the vendor commits to deleting their copies thereafter.
Deal-breaker: standard SaaS terms with no exit clause. You’ll regret it.
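One practical way to pressure-test a portability clause before you depend on it: request a sample export during the trial and check it mechanically against your own records. A minimal sketch, assuming a JSONL export and assuming you keep your own list of conversation IDs (the format and field names are hypothetical):

```python
import json

# Your own reference list of conversation IDs (illustrative values).
expected_ids = {"conv-001", "conv-002", "conv-003"}

# A sample vendor export in JSONL form (format and fields are assumptions).
sample_export = "\n".join([
    json.dumps({"id": "conv-001", "messages": ["..."]}),
    json.dumps({"id": "conv-002", "messages": ["..."]}),
])

# Compare what came back against what you know you put in.
exported_ids = {json.loads(line)["id"] for line in sample_export.splitlines()}
missing = expected_ids - exported_ids

# Anything missing is a question for the vendor before you sign.
print(sorted(missing))  # → ['conv-003']
```

A gap found during the trial is a negotiating point; a gap found after the vendor has disappeared is a data-loss incident.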
The shorter version, on a Post-it
If you want to remember just one thing, it’s this: treat any AI vendor as you’d treat a payroll provider. Same level of due diligence, same contractual rigour, same expectation that they’ll defend your data to the same standard you would.
Most aren’t set up to. The big ones (Microsoft, Anthropic, Google’s enterprise tier) increasingly are. Smaller vendors built on top of those tend to inherit the protections, but you have to read the contract chain.
Why this matters in 2026
The Data (Use and Access) Act 2025 raised the stakes on automated decision-making in the UK. The first ICO enforcement actions specifically about AI use are expected this year. Sectors that already had to be careful (charities, dioceses, healthcare, financial services) are about to find out that AI procurement is a regulated activity, not a productivity tool.
Get the questions in writing now. The vendors who can answer well are the ones who’ll still be in business when the enforcement starts.
If you’d like a hand running this checklist on a vendor you’re evaluating, we’re happy to look. We have this conversation a lot.