JavaScript, not Python: Ryan Dahl's case for the language of AI agents
Ryan Dahl is one of the more interesting people to listen to on AI right now. He created Node.js in 2009, regretted enough of it to spend the next decade building Deno as a corrective, and over the last couple of years has been making a contrarian case that surprises a lot of people: JavaScript — not Python — will be the language of AI agents.
That’s a striking claim because Python has been the assumed default for everything AI for the better part of a decade. PyTorch, transformers, every research paper, every Jupyter notebook. So it’s worth understanding why someone with Dahl’s track record thinks the assumption is wrong.
The actual argument
Dahl’s case isn’t about training models. He’ll happily concede Python wins for that. His argument is about the runtime that AI agents will live in — the layer that actually executes the code an LLM produces, calls the tools, talks to the network, and runs in production.
For that runtime, he argues, you want four things: ubiquity, a tight security model, native async, and web-standard APIs. JavaScript has all four. Python has roughly none.
1. Ubiquity
JavaScript runs in every browser, on every server, in every edge worker. When an AI agent wants to do anything — render a UI, fetch a URL, parse JSON, talk to a database — the language with the most existing libraries, the most Stack Overflow answers, and the most code in the models' pretraining data is JavaScript. Dahl's bet is that models write better JavaScript than Python simply because they've seen vastly more of it.
2. Security model
This is the bit Dahl gets most exercised about. If your AI agent generates a function call that contains a typo or a malicious instruction, what stops it from deleting your filesystem?
In Node.js, almost nothing does (a permission model has landed recently, but it's opt-in). In Python, nothing does. In Deno — the runtime Dahl built — the answer is "you didn't pass --allow-write, so it can't." Deno was designed in 2018 to be secure by default. Eight years later, that design choice turns out to be exactly what AI agents need.
You can run an AI-generated script and tell the runtime: this code can read these specific files, talk to this one API, and nothing else. It’s the closest thing the industry has to a sandbox-by-default model for code that you didn’t personally write.
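To make that concrete, here is a minimal sketch. The helper function and the script name are hypothetical, not part of Deno; the `--allow-read` and `--allow-net` flags are Deno's real permission flags, and anything not granted is denied at runtime.

```typescript
// Hypothetical helper: build a locked-down `deno run` invocation for an
// AI-generated script. Only the listed paths and hosts are reachable.
function denoCommand(
  script: string,
  opts: { read?: string[]; net?: string[] } = {},
): string[] {
  const args = ["deno", "run"];
  if (opts.read?.length) args.push(`--allow-read=${opts.read.join(",")}`);
  if (opts.net?.length) args.push(`--allow-net=${opts.net.join(",")}`);
  // Note what's absent: no --allow-write, no --allow-env, no --allow-run.
  args.push(script);
  return args;
}

const cmd = denoCommand("agent_step.ts", {
  read: ["./data"],
  net: ["api.example.com"],
});
// cmd: ["deno", "run", "--allow-read=./data",
//       "--allow-net=api.example.com", "agent_step.ts"]
```

If the generated script tries to write a file or call any other host, Deno refuses at the syscall boundary rather than trusting the code.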
3. Native async
AI agents are I/O-bound by nature. They wait on model APIs, on tool calls, on databases, on each other. JavaScript's event loop and async/await primitives are the cleanest mainstream way to express that pattern. Python's asyncio is workable but bolted on, and most Python AI code is still synchronous because that's what the libraries assume.
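A sketch of that pattern: an agent fans out three tool calls and awaits them together. `callTool` is a stand-in for real work (a model API, a database query); here it just waits and returns.

```typescript
// Simulated I/O-bound tool call: waits `ms`, then returns a result string.
async function callTool(name: string, ms: number): Promise<string> {
  await new Promise((resolve) => setTimeout(resolve, ms));
  return `${name}: done`;
}

async function agentStep(): Promise<string[]> {
  // The three waits overlap on the event loop, so the step takes roughly
  // as long as the slowest call, not the sum of all three.
  return Promise.all([
    callTool("search", 30),
    callTool("db", 10),
    callTool("model", 50),
  ]);
}
```

`Promise.all` also preserves result order regardless of which call finishes first, which keeps the agent's tool-result bookkeeping trivial.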
4. Web-standard APIs
Modern JavaScript runtimes (Deno, Bun, Cloudflare Workers, the browser) all implement the same Fetch API, the same Streams API, the same Web Crypto API. An agent written for one runs on all of them. Python has a different HTTP library every six months and no consensus on async I/O.
What this means for businesses
If Dahl is right — and we think he’s mostly right — here’s the practical takeaway for anyone making technology choices today:
- Don’t pick Python for an AI-adjacent product just because the research is in Python. The research is in Python; the production system rarely needs to be. Inference happens behind a JSON API; what calls that API is wide open.
- Pick a runtime with a real security model. If you’re going to let LLMs generate code that runs in your environment — even sandboxed user code in a side feature — Deno’s permission flags give you a guarantee Node and Python can’t.
- Use TypeScript. When the model is producing code, types catch a lot of the mistakes a human reviewer would otherwise have to spot. Deno has TypeScript support built in; Node can now run it too via type stripping, experimental as of v22.6.
- Don’t conflate “AI” with “an LLM call”. The LLM is one component. The infrastructure around it — auth, rate limiting, audit logging, prompt-injection defence — is most of the real work, and it’s software engineering, not data science.
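The TypeScript point above is worth seeing in miniature. With a hypothetical typed tool interface (the names here are illustrative, not from any SDK), a wrongly shaped model-generated call is rejected by `tsc` before anything executes:

```typescript
// Hypothetical tool signature for an agent's search tool.
interface SearchArgs {
  query: string;
  limit: number;
}

function search(args: SearchArgs): string[] {
  return Array.from({ length: args.limit }, (_, i) => `${args.query} #${i + 1}`);
}

// A correctly shaped call compiles and runs:
const hits = search({ query: "deno permissions", limit: 2 });

// A model slip-up never gets that far — it fails type-checking:
// search({ query: "deno permissions", limit: "2" }); // error: string ≠ number
```

In plain JavaScript (or Python), the bad call would run and fail somewhere downstream, possibly silently.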
The honest counter-argument
The case against Dahl: the Python ecosystem has enormous inertia. Every AI library, every framework, every model SDK ships Python first. JavaScript SDKs lag by months. For a small team trying to ship something this quarter, going against the grain has a cost.
Our take: for institutional clients shipping production software that’ll be running in 2031, the Dahl argument lands. The runtime decisions you make now will outlast at least three generations of AI tooling. Pick the one with a security model you can defend to a regulator.
For start-ups racing to a quarterly milestone? Use whatever your team writes fastest. The model layer is going to be replaced anyway.
If you’re trying to figure out which side of this debate matters for the system you’re about to build, we’re always happy to argue both sides.