> The benefits we get from checking in with other humans, like error correction and delegation, can all be done better by AI.
Not this generation of AI, though. It's a text predictor, not a logic engine - it can't find actual flaws in your code; it's just really good at saying things that sound plausible.
And not this or any existing generation of people. We're bad at distinguishing wants from needs, at being specific, at generalizing our goals into a conceptual framework of existing patterns, and at documenting and explaining things in a way that arrives at a solid goal.
The idea that the entire top-down process of a business can be typed into an AI model, with a result coming out the other end, is, again, a specific type of tech-person ideology that sees humanity as an unfortunate annoyance in the process of running a business. The rest of the world sees it the other way round.
I would have agreed with you a year ago.
Absolutely nuts; I feel like I'm living in a parallel universe. I could list several anecdotes here where Claude autonomously solved issues that would have taken me (someone with 17 years of software development experience, from embedded devices to enterprise software) hours, if not days.
To the naysayers... good luck. No group of people's opinions matters at all. The market will decide.
If only you realized how ridiculous your statement is, you would never have made it.
I agree with you, but if you think that AI is just a text predictor, you are missing an important point.
Intelligence can be born of simple targets, like next-token prediction. Predicting the next token with the accuracy it takes to answer some of the questions these models can answer requires complex "mental" models.
Dismissing it just because its training objective is next-token prediction rather than "strengthen whatever circuit lights up" is missing the forest for the trees.
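For concreteness, here's roughly what that "simple target" looks like as a loss function. This is a minimal PyTorch sketch; the embedding-plus-linear-head "model" is a stand-in of my own, not any real architecture, but the objective itself is the same one the large models train on:

```python
# Minimal sketch of next-token prediction as a training objective.
# The "model" here (embedding -> linear head) is a toy stand-in; a real
# transformer sits between those two layers, but the loss is unchanged.
import torch
import torch.nn.functional as F

vocab_size, seq_len, d_model = 1000, 16, 64

embed = torch.nn.Embedding(vocab_size, d_model)
head = torch.nn.Linear(d_model, vocab_size)

tokens = torch.randint(0, vocab_size, (1, seq_len))  # a batch of token ids
logits = head(embed(tokens))                         # (1, seq_len, vocab_size)

# The entire objective: predict token t+1 from the context up to token t.
loss = F.cross_entropy(
    logits[:, :-1].reshape(-1, vocab_size),  # predictions at positions 0..n-2
    tokens[:, 1:].reshape(-1),               # targets are the shifted tokens
)
print(loss.item())
```

Everything interesting about these models lives in what the network has to learn internally to drive that one number down, not in the objective itself.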
You’re committing the classic fallacy of confusing mechanics with capabilities. Brains are just electrons and chemicals moving through neural circuits. You can’t infer constraints on high-level abilities from that.
Your brain is a slab of wet meat, not a logic engine. It can't find actual flaws in your code - it's just half-decent at pattern recognition.
> it can't find actual flaws in your code
I can tell from this statement that you don't have experience with claude-code.
It might just be a "text predictor", but in the real world it can take a messy log file and, from that, navigate to and fix issues in the source.
It can appear to reason about root causes and issues with sequencing and logic.
That might not be what is actually happening at a technical level, but it is indistinguishable from actual reasoning, and it produces real-world fixes.
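For anyone who hasn't tried it: at its core the tool is driving an LLM API over your files in a loop. Here's a minimal single-shot sketch against the Anthropic Python SDK; the file names, prompt wording, and model id are my illustrative assumptions, and this is not how claude-code is actually implemented (it also navigates the repo, runs commands, and iterates on its own):

```python
# Rough single-shot version of the log -> diagnosis step described above.
# File paths and the model id are hypothetical placeholders.
import pathlib
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

log = pathlib.Path("crash.log").read_text()            # hypothetical messy log
source = pathlib.Path("src/scheduler.py").read_text()  # hypothetical source file

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # substitute whatever model id is current
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": (
            "Here is a crash log:\n" + log +
            "\n\nAnd the source file it points at:\n" + source +
            "\n\nIdentify the root cause and propose a concrete fix."
        ),
    }],
)
print(response.content[0].text)
```

Whether you call what happens inside that call "reasoning" or not, the output is the kind of root-cause analysis the parent comment is describing.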