> The flip side of this is that, thanks to LLMs, working on a minority platform isn't the barrier that you might expect
This is a nice thought, but for Agda in particular it's just not true. It's one of the few languages I've seen that is so underrepresented in training data that frontier LLMs (Codex, Claude Code) reliably give up with "I realized I can't do this" after wasting lots of tokens going back and forth.
In fact, I think this positions Agda uniquely poorly.
Huh.
I wonder how compiler technology would fare here, possibly as a component of an attempted solution.
This. LLMs are no good at stuff they haven't seen a lot of training data for. (I say this as a SystemVerilog programmer who also does some C coding for interfacing with SystemVerilog; neither has a lot of open source code to show LLMs.)