
Tade0 · yesterday at 9:18 AM

> The benefits we get from checking in with other humans, like error correction, and delegation can all be done better by AI.

Not this generation of AI though. It's a text predictor, not a logic engine - it can't find actual flaws in your code, it's just really good at saying things which sound plausible.


Replies

xnorswap · yesterday at 10:15 AM

> it can't find actual flaws in your code

I can tell from this statement that you don't have experience with claude-code.

It might just be a "text predictor", but in the real world it can take a messy log file and, from that, navigate the codebase and fix issues in the source.

It can appear to reason about root causes and issues with sequencing and logic.

That might not be what is actually happening at a technical level, but it is indistinguishable from actual reasoning, and produces real world fixes.

weego · yesterday at 10:51 AM

And not this or any existing generation of people. We're bad at determining want vs. need, at being specific, at genericizing our goals into a conceptual framework of existing patterns, and at documenting and explaining things in a way that gets to a solid goal.

The idea that the entire top-down processes of a business can be typed into an AI model and out comes a result is, again, a specific type of tech-person ideology that sees humanity as an unfortunate annoyance in the process of delivering a business. The rest of the world sees it the other way round.

afro88 · yesterday at 4:19 PM

I would have agreed with you a year ago.

laichzeit0 · yesterday at 1:08 PM

Absolutely nuts. I feel like I'm living in a parallel universe. I could list several anecdotes here where Claude has solved issues for me autonomously that (for someone with 17 years of software development, from embedded devices to enterprise software) would have taken me hours if not days.

To the naysayers: good luck. No group's opinions matter at all here. The market will decide.

lpapez · yesterday at 10:51 AM

If you realized how ridiculous that statement is, you would never have made it.

nazgul17 · yesterday at 1:16 PM

While I agree, if you think that AI is just a text predictor, you are missing an important point.

Intelligence can be born of simple objectives, like next-token prediction. Predicting the next token with the accuracy it takes to answer some of the questions these models can answer requires complex "mental" models.

Dismissing it just because its training objective is next-token prediction instead of "strengthen whatever circuit lights up" is missing the forest for the trees.
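As a toy illustration of the objective being debated here (a bigram counter, vastly simpler than what an LLM actually learns, but the same "predict the next token" framing; the corpus is made up):

```python
from collections import Counter, defaultdict

# Toy training corpus; real models train on trillions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which token follows which: a bigram model, the simplest
# possible next-token predictor.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the continuation seen most often after `token` in training."""
    return following[token].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it follows "the" twice, more than any other token
```

The point of the forest-for-the-trees argument is that the objective says nothing about what the model must internally represent in order to score well on it.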

p-e-w · yesterday at 9:31 AM

You’re committing the classic fallacy of confusing mechanics with capabilities. Brains are just electrons and chemicals moving through neural circuits. You can’t infer constraints on high-level abilities from that.

jatora · yesterday at 9:54 AM

[flagged]
ACCount37 · yesterday at 11:34 AM

Your brain is a slab of wet meat, not a logic engine. It can't find actual flaws in your code - it's just half-decent at pattern recognition.
