Hacker News

notnullorvoid · today at 4:07 AM · 8 replies

Some of us do actually have intimate knowledge in certain areas where guiding an AI takes longer than doing it yourself. It's not about typing speed. When you know something really, really well, the solution/code is already known to you, or the very act of thinking about the problem makes the solution known to you in full. When that happens, writing the solution itself is less text than writing a description of it precise enough for the AI (not even counting the back and forth of reviewing the AI's output and correcting it).


Replies

bulbar · today at 4:14 AM

Giving a precise description of what the computer is supposed to do is exactly what programming is.

The more specific your requirements, the closer you get to the point where natural language stops being useful.

lmeyerov · today at 4:39 AM

Maybe a failure to automate?

The volume of people successfully adopting agentic engineering practices suggests this stuff isn't rocket science, but it is a learned skill and takes setup.

A year into heavy AI coding, my experience is that what you're describing should help you run 5+ agents simultaneously on a project: you know what you're doing, you set it up right, and you know how to tell the agents to leverage that properly.

lukan · today at 11:23 AM

Yes, there are still many areas where skilled humans are faster than AI (meaning it's faster to code it yourself than to provide so much context and guidance that the AI can do it on its "own").

But in general the statement is no longer true: for generic projects/problems, there's a pretty good chance the AI can one-shot a working solution from a lazily typed, vague prompt.

Fr0styMatt88 · today at 5:44 AM

Yeah, it’s when you go off the happy path that it gets difficult, like when there’s a weird behaviour in your vibe-coded app that you don’t quite know how to describe succinctly, and you end up in some long back-and-forth.

But man AI is phenomenal for getting stuff out of your head and working quick.

Tuna-Fish · today at 10:10 AM

That doesn't matter. The statement wasn't "faster than AI right now", it was "will always be faster than AI". And that's just nonsense.

Current AI systems are extremely serial: very little of the problem's inherent parallelism is exploited. Current-gen AI systems run at most a few hundred thousand operations in parallel, while for frontier models, billions of operations could be run in parallel. In other words, what currently takes an AI 8 hours will take barely long enough for you to perceive the delay after you release the Enter key.
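A rough sanity check of that claim, using illustrative figures of my own (the comment only says "a few hundred thousand" versus "billions"):

```python
# Back-of-envelope speedup from exploiting the available parallelism.
# Assumed figures (illustrative, not from the comment): ~3e5 ops run in
# parallel today vs ~3e9 ops that could run in parallel.
current_parallel_ops = 3e5
possible_parallel_ops = 3e9

speedup = possible_parallel_ops / current_parallel_ops  # 10,000x
eight_hours_s = 8 * 3600                                # 28,800 seconds

print(f"speedup: {speedup:.0f}x")
print(f"8 hours shrinks to ~{eight_hours_s / speedup:.1f} s")
```

Under those assumptions an 8-hour run compresses to roughly 3 seconds, which is indeed on the edge of a perceptible delay.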

For a demo, play around with https://chatjimmy.ai/ , the AI chatbot from Taalas, where they etched the model into silicon in a distributed way instead of storing it in RAM and sucking it into the execution units through a straw. It's an 8B-parameter model, so it's unsuitable for complex problems, but the techniques used for it will work for larger models too, and they are working to get there.

And even Taalas is very far from the limits. Modern higher-quality LLM chatbots operate at ~40 tokens per second; the Taalas chatbot operates at 17,000 tokens/s. If you took full advantage of parallelism, you should be able to reach a latency of low hundreds of clock cycles per token, or a single-request throughput of tens of millions of tokens per second (with a fully pipelined model able to serve one token per clock cycle across low hundreds of concurrent requests).

Why doesn't everyone do it like that right now? Because to do this you need to etch your model into silicon, which on a modern leading-edge process is a very involved effort that costs hundreds of millions or more in development and mask costs (we are not talking about single chips here; you can barely fit that 8B model onto one) and takes around a year. As long as models keep improving so fast that a year-old model is considered too old to pay back the capital costs, the investment is not justified. But when it is done, it will not just make AI faster; it will also make it much more energy-efficient per token, since most of the energy cost comes from moving data around and loading/storing it in memory.
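The throughput numbers above can be checked with some arithmetic, assuming a clock rate of my own choosing (the comment doesn't specify one):

```python
# Back-of-envelope check of the per-request throughput claim.
# Assumed clock rate (illustrative, not from the comment): 2 GHz.
clock_hz = 2e9
cycles_per_token = 200  # "low hundreds of clock cycles per token"

per_request_tps = clock_hz / cycles_per_token
print(f"single-request throughput: {per_request_tps / 1e6:.0f}M tokens/s")

# Fully pipelined, the chip emits one token per clock cycle in aggregate,
# so this many concurrent requests are needed to keep the pipeline full:
requests_to_saturate = clock_hz / per_request_tps
print(f"requests to saturate the pipeline: {requests_to_saturate:.0f}")
```

At 2 GHz and 200 cycles per token, a single request sees 10 million tokens/s, and about 200 concurrent requests saturate a pipeline emitting one token per cycle, matching the "tens of millions of tokens per second" and "low hundreds of requests" figures.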

And I want to stress that none of the above depends on any kind of new development or invention. We know how to do it; it's held back only by the pace of model improvement and economics. When models reach a state of being truly "good enough", it will happen. It feels perverse to me that people are treating this situation as "there was a pre-AI period that worked like X; now we are in a post-AI period and we have figured out that it will work like Y". No. We are at the very bottom of a very steep curve, and everything will be very different when it's over.

threethirtytwo · today at 6:40 AM

I don't believe this. Either you're lying, or you just haven't caught on to how to use agentic AI.

Everything I do to interact with my computer is through an agent now.

larodi · today at 5:10 AM

Care to explain which particular intimate knowledge allowed you, in the last 6-9 months, to be faster than AI in a certain area?

Honestly, I'm still faster than AI at cooking scrambled eggs, but definitely not faster than either the AI or a compiler at translating stuff into code.

squidbeak · today at 8:42 AM

> Some of us do actually have intimate knowledge in certain areas where guidance of an AI takes longer than doing it yourself.

You speak as if AI development is frozen, and you ignore the poster's point:

> that gap will only increase as LLMs get more intelligent