
HCOMS · March 2026 · 7 min read

A year ago we said we’d trial Claude Code, Cursor and GitHub Copilot properly across the team for a quarter and report back. The trial turned into adoption. Here’s the honest report.

The top line: AI made our senior engineers measurably more productive, and our junior engineers measurably more dangerous. Most of what’s interesting is in the why.

What got genuinely faster

Boilerplate

Database migrations. CRUD scaffolds. Form-validation rules. CSV importers. The forty-minute task of writing a new Laravel resource for a domain you’ve modelled a hundred times before is now a four-minute task. Multiplied across a team, that’s a developer-day a week back per engineer.

Reading unfamiliar code

The clearest single win. Drop a stack trace into Claude with the relevant files, ask "what’s actually happening here?", and you get a clear plain-English explanation in seconds. For a team that does a lot of system-rescue work on inherited code, this is genuinely transformative.

Test scaffolding

"Generate ten edge-case tests for this function" produces a meaningfully better starting set than a tired engineer would write at 4pm. We don’t ship the AI tests as-is — they always need pruning and tightening — but the floor is much higher.
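Concretely, the output looks something like this (a Python sketch — the `parse_price` helper and its tests are invented for illustration; our real code is PHP):

```python
# Illustrative only: a hypothetical parse_price helper and the kind of
# edge-case test set a model drafts for it.

def parse_price(raw: str) -> int:
    """Parse a price string like '£1,234.50' into pence."""
    cleaned = raw.strip().lstrip("£$").replace(",", "")
    return round(float(cleaned) * 100)

# Model-drafted edge cases: a good floor, but they still need pruning --
# duplicates, and cases the function was never specified to support.
def test_plain_integer():
    assert parse_price("12") == 1200

def test_currency_symbol_and_thousands_separator():
    assert parse_price("£1,234.50") == 123450

def test_surrounding_whitespace():
    assert parse_price("  9.99 ") == 999

def test_zero():
    assert parse_price("0") == 0
```

The engineer's job shifts from inventing the cases to deciding which ones actually reflect the spec.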

Translating between languages and frameworks

Porting a small SQL Server stored procedure to PostgreSQL. Converting a Vue component to React. Reading a Python script and writing the PHP equivalent. Used to be a half-day. Now it’s an hour, mostly spent on review.

What got slower

Code review

This caught us by surprise. With AI generating more code, more code lands in PRs. Reviewers have to look harder for the subtle errors a model introduces: the wrong-but-plausible variable name, the slightly-off business logic, the fake API method that doesn’t actually exist. Senior-engineer time moved from writing to reviewing, and the review now takes longer per line.

Net is still positive. But the productivity gain isn’t as dramatic as the marketing suggests.

Onboarding new engineers

This is the part we worry about most. A junior who reaches for Cursor before they’ve understood the problem doesn’t learn the way the senior engineers learned. They ship code they couldn’t have written, and six months later they can’t debug it because the mental model never got built. We’ve had to redesign the early months of our junior programme to enforce a rule of “understand before you autocomplete”.

The four anti-patterns we tell new joiners to avoid

1. "Vibe-coding"

Letting the model write 200 lines and accepting them because the tests pass. Tests pass for many wrong implementations. If you didn’t understand it line by line, you didn’t write it — and you can’t maintain it. Every line that lands in main has to be code you’d defend in review whether you typed it or accepted it.
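A minimal sketch of the failure mode (Python, invented example — the bug pattern is the point, not the function):

```python
# A plausible model-generated implementation that passes a weak test
# set while being wrong.

def is_leap_year(year: int) -> bool:
    return year % 4 == 0  # wrong: ignores the century rule

# The suite the model generated alongside it -- all green:
weak_suite = [(2024, True), (2023, False), (2000, True)]
assert all(is_leap_year(y) == expected for y, expected in weak_suite)

# The case nobody wrote: 1900 is divisible by 4 but not by 400, so it
# was NOT a leap year. The implementation says otherwise -- and the
# suite above never probed it.
assert is_leap_year(1900) is True  # the undetected bug
```

"Tests pass" only means the code agrees with the tests, and here the model wrote both.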

2. Trusting the cited reference

Models still confidently invent API methods, package names, RFC numbers, Stack Overflow URLs. Verify every external reference before you commit. "Carbon::isWeekendish() doesn’t exist" is the modern "jquery.fn.weeklyHelper() doesn’t exist".
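The check is cheap. In Python (our Carbon example is PHP, but the same habit translates):

```python
# Quick existence checks catch hallucinated APIs before they reach a PR.
import datetime

# A model might confidently suggest date.is_weekendish() -- it doesn't exist.
assert not hasattr(datetime.date, "is_weekendish")

# The real API: weekday() returns 0 (Monday) through 6 (Sunday).
d = datetime.date(2026, 3, 7)  # a Saturday
assert d.weekday() == 5
is_weekend = d.weekday() >= 5
assert is_weekend
```

Thirty seconds in a REPL or the official docs is cheaper than a broken build.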

3. Letting AI design the architecture

Models are great at producing code given a clear design. They’re mediocre at choosing the design, because design choices depend on context the model doesn’t have — the codebase’s history, the team’s skills, the client’s constraints, the ten previous arguments about why not to do it the obvious way. Architecture is still a senior-engineer-with-a-whiteboard activity.

4. Auto-accepting refactors

"Refactor this to be cleaner" is the most dangerous prompt. Models love to refactor. They’ll silently introduce subtle behaviour changes — off-by-one errors, edge cases removed, error handling deleted. Refactor with a comprehensive test suite or don’t refactor at all.
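What "silently introduced" looks like in miniature (Python sketch, invented functions):

```python
# A "cleaner" refactor that quietly drops an edge case.

def average_original(values):
    if not values:
        return 0.0  # deliberate: empty input means zero, per the spec
    return sum(values) / len(values)

def average_refactored(values):
    # The model's tidier one-liner -- which deleted the empty-input guard.
    return sum(values) / len(values)

# Both agree on the happy path, so a thin suite stays green:
assert average_original([1, 2, 3]) == 2.0
assert average_refactored([1, 2, 3]) == 2.0

# Only a test that exercises the edge case catches the change:
assert average_original([]) == 0.0
try:
    average_refactored([])
except ZeroDivisionError:
    pass  # behaviour silently changed: a crash where the spec said 0.0
```

If your suite doesn't exercise the edge cases, the refactor "works" right up until production.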

The boring practical setup

What we landed on after twelve months of trial and error:

The verdict

On net: a senior engineer with AI is roughly thirty percent more productive than one without. A junior engineer with AI is faster but produces more bugs and learns more slowly. Which way it nets out depends on whether your team has the bench strength to review what the AI produces.

For a senior-heavy studio like ours, this has been a clean win. For a team that’s mostly junior we’d be more cautious. AI multiplies whatever the team is already doing, mistakes included.

If your team is figuring out the right adoption posture, happy to compare notes. We’re still learning too.
