I agree with you on everything you said here except:
> when you know how the thing works and have that mental context, you will always be faster than an AI
That's just plain false, honestly. No one can type at the speed AI can code, even factoring in the time you need to spend to properly write out the spec & design rules the AI needs to follow when implementing your app/feature/whatever. And that gap will only increase as LLMs get more intelligent.
In my experience AI can write _something_ from scratch, but often edge cases won't be handled until I go through and read the results or test it. Usually when I'm writing by hand I will naturally find the majority of edge cases as I go. By the time I've read through the results and fixed said edge cases, I usually would have been faster just doing it myself.
> No one can type at the speed AI can code
Don't we already have a weekly post nowadays explaining, again, that typing isn't the bottleneck?
It should be “…you will always be faster than someone _without the knowledge_ using an AI”
if you've never had the experience of handing something off to someone else and finding it more laborious and slower than doing it yourself, due to having to set constraints and define success, then you simply haven't held a senior enough position to comment on this with any authority
They probably mean faster to a higher-level goal rather than SLOC. Typing speed and SLOC have never been that useful for measuring productivity.
as i understood it he's referring to the overall time it takes to build a complete, finished piece of software, accounting for the refactoring and bug fixes and all that. cause if you hadn't understood the tools you're using you'd keep running into roadblocks, and that adds up
Plenty of cars can get off the line faster than an F1 car. But around a track, an F1 car is by far the fastest in the world.
Going fast isn’t the difficult bit.
Where does this certainty that LLMs will get more intelligent stem from?
Except it's often faster to make the change yourself than explain it to an AI.
>No one can type at the speed AI can code
You can definitely be faster than frontier models. The number of tokens per second is not that high and they require a lot of tokens for thinking and navigating things.
> LLMs get more intelligent
The Spicy Autocomplete koolaid club is out in force today I see.
We clearly have different ideas of what the word "intelligent" means.
Some of us do actually have intimate knowledge in certain areas where guiding an AI takes longer than doing it yourself. It's not about typing speed; it's that when you know something really, really well, the solution/code is already known to you, or the very act of thinking about the problem makes the solution known to you in full. When that happens, it's less text to write the solution itself than to write a sufficient description of it for the AI (not even counting the back-and-forth of reviewing the AI's output and correcting it).