Hacker News

Skyy93 | yesterday at 10:12 PM | 10 replies

This article makes no real sense to me.

>You think of something new and express it - through a prompt, through code, through a product - it enters the system. Your novel idea becomes training data. The sheer act of thinking outside the box makes the box bigger.

This was the same before: if you had a novel idea and made a product out of it, others followed. Especially for LLMs: they are not (so far) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date may end up in future training data but isn't available to the model yet; you only have to be fast enough. And LLMs/AI agents like Claude are exactly what give you the speed you need to bring out something new.

The next thing is that we also have open-source and open-weight models that every one of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.

>We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.

Why should this happen? The moment you make your idea public, anyone can build it. This leads to greater proliferation than before, when the artificial barrier of having to learn to code kept people from getting, or creating, what they wanted.


Replies

munificent | today at 1:25 AM

> This was the same before: if you had a novel idea and made a product out of it, others followed.

The article says:

"Ideas are cheap - execution is hard"

"Announcing, signaling your ideas offered much greater benefit than risk, because your value multiplied by connections, and execution was the moat you could stand behind."

That's the key difference. It used to be much harder for a competitor to catch up to the state of your implementation.

RajT88 | yesterday at 10:37 PM

> This was the same before: if you had a novel idea and made a product out of it, others followed.

You've almost captured the full picture of it.

If you have a great idea, it's not going to be self-evidently great until you've proved it can make money. That's the hard part, and it comes at great personal, professional, and financial risk.

Algorithms are cheap. Sure, they could use your LLM history to figure out what you did, or the LLM could just reason it out. That could save them some work.

But again - the hard part is not cloning the product, it's stealing your customers. People don't seem to be focused on the hard parts.

asdff | today at 8:14 AM

Another thing the author notes is the idea of prompts getting logged. I can imagine that, with some clever statisticians, the model could construct the idea for some product or company before you even formulate it yourself, just based on what you've already prompted. Then it can evaluate market fit, estimate returns, start its own version of that company, and beat you to market should it turn out to be a good idea.

Now before you say this is unrealistic or isn't done today, just know that all of this is perfectly possible with existing technology. In fact, it's largely how adtech already works: using metadata to predict products you might want before you even realize you want them.
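A toy sketch of the kind of signal extraction being described (all prompt strings and field names here are invented for illustration; real pipelines would be far more sophisticated): compare term frequencies across batches of logged prompts and flag the terms that are accelerating, a crude proxy for an idea taking shape.

```python
from collections import Counter

# Hypothetical prompt logs, bucketed into an earlier and a later period.
earlier = [
    "how do I parse receipts with OCR",
    "best OCR library python",
]
later = [
    "OCR receipts to spreadsheet app idea",
    "receipt scanning saas market size",
    "pricing for a receipt OCR api",
]

def term_counts(prompts):
    """Lowercased word frequencies across a batch of prompts."""
    counts = Counter()
    for p in prompts:
        counts.update(p.lower().split())
    return counts

before, after = term_counts(earlier), term_counts(later)

# Terms whose frequency grew between the two periods.
trending = {t: after[t] - before.get(t, 0)
            for t in after if after[t] > before.get(t, 0)}

# "receipt" tops the list: the product-shaped term that only
# appears once the user starts thinking about a business.
print(sorted(trending, key=trending.get, reverse=True)[:3])
```

With enough volume, even this naive word-counting surfaces the drift from "how do I do X" toward "what would X cost as a product", which is the point the comment is making.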

8bitsrule | today at 3:07 AM

> This was the same before: if you had a novel idea and made a product out of it, others followed.

March 20, 1926: Hungarian physicist and electrical engineer Kalman Tihanyi applies for his first patent for a fully electronic television system. Tihanyi's ideas prove so essential that, in 1934, RCA is required to buy his patents.

Kalman who?

AmbroseBierce | today at 4:54 AM

I'm sure being exposed to one million video games instead of 100 works just the same; scarcity was a feature, not a bug.

cryptonector | today at 4:03 AM

> Especially for LLMs: they are not (so far) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date may end up in future training data but isn't available to the model yet; you only have to be fast enough.

First of all: it's not as though no new LLMs are being trained. Of course they are.

Second: continuously learning LLMs are not far off. Since they can typically search the web via agents, they can effectively "learn" now, and they can also learn (not so well) by writing notes into a document hidden from you. Indeed, some LLMs can inspect your other sessions with them and refer to them in future sessions; I've noticed this with Claude.

Third: already we see some AI companies wanting to train their models on your prompts. It's going to happen.

> The next thing is that we also have open-source and open-weight models that every one of us with a decent consumer GPU can fine-tune and adapt, so it's not only in the hands of a few companies.

There's a pretty good chance that LLMs buff open source, yes.

> > We will again build and innovate in private, hide, not share knowledge, mistakes, ideas.

> Why should this happen? The moment you make your idea public, anyone can build it. [...]

This was always the case, but now the cycle is faster. So if you must use an LLM, you might use one that runs on your own hardware; then your prompts are truly yours. But as TFA notes, the AIs will learn just from your (and your private LLM's) searches, and in some cases that will be enough for them to figure out what you're up to. Sure, maybe the Microsofts and Googles of the world won't be able to capitalize on the millions of interesting ideas floating about, but still: the moment you uncloak, the machine will eat your future alive, so you'll try to stay off its radar and build a moat it can't see (good luck!). Well, that's what TFA says; it seems very plausible to me.

middayc | yesterday at 10:59 PM

> This was the same before: if you had a novel idea and made a product out of it, others followed. Especially for LLMs: they are not (so far) learning on the fly. Claude Opus 4.6's knowledge cutoff was August 2025, so every idea you type in after that date may end up in future training data but isn't available to the model yet; you only have to be fast enough. And LLMs/AI agents like Claude are exactly what give you the speed you need to bring out something new.

You have a point about the update intervals and the higher speed they give developers. But you are talking about now, and I was making a thought experiment about a potential future. LLMs are not learning on the fly, but I suspect they do log the conversations and their responses, and could also deduce from further interaction whether a particular response was satisfactory to the user. In a world where available training data is drying up, nobody is throwing all this away. Gemini even has direct upvote/downvote on responses. Algorithms will probably improve, and the intervals will probably shorten.

Given the detailed information that all this back and forth generates, I think it's not hard to use similar technology to track underlying trends, surface the problems associated with them and the whole solution space being discussed, and generate the solution before the people who thought of it even release it. Theoretically :)
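The "deduce whether a response was satisfactory" step above can be sketched very simply (the log schema, field names, and weights here are all invented for illustration): combine an explicit vote with an implicit signal, such as the user immediately rephrasing the question, into a per-response label of the kind preference training would consume.

```python
# Hypothetical interaction logs: explicit vote plus one implicit signal.
logs = [
    {"response_id": "r1", "vote": "up", "user_rephrased": False},
    {"response_id": "r2", "vote": None, "user_rephrased": True},
    {"response_id": "r3", "vote": "down", "user_rephrased": True},
]

def satisfaction_score(entry):
    """Score one logged response. A user who immediately rephrases the
    question was probably unsatisfied, even without an explicit vote."""
    score = {"up": 1.0, "down": -1.0, None: 0.0}[entry["vote"]]
    if entry["user_rephrased"]:
        score -= 0.5  # arbitrary penalty weight for the implicit signal
    return score

# Labels a future training pipeline could filter on:
# keep high-scoring responses as positives, low-scoring ones as negatives.
labels = {e["response_id"]: satisfaction_score(e) for e in logs}
print(labels)
```

Nothing here requires new technology; it's the same event-logging-plus-labeling loop recommender systems already run, which is why the comment treats it as a question of when, not whether.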

I think open development will become less open. I don't like it, but I think it's already happening. First, blogs and forums moved to specialized platforms (SO, Discords, ...), and now even some of those are d(r)ying. If people (in extreme cases) don't even read the code they produce, why would they read about the code, or discuss code that isn't even in their care? And that's without the theoretical fear of the global Borg slurping up everything they write.

raincole | today at 5:01 AM

The author read too much sci-fi. But too little at the same time.

The problem was never that we don't have enough ideas; it's how to find the good ones in the sea of ideas. Most ideas that eventually proved right sounded very stupid at first. Selling books online? Pff.

By the way, Liu (the author of The Three-Body Problem, who popularized the concept of the "Dark Forest") has a short story about exactly that, Cloud of Poems. Unfortunately it has never been translated into English.

tayo42 | today at 2:19 AM

>Especially for LLMs, they are not (till now) learning on the fly.

Was this just awkward phrasing, or did something change so that they now learn after training?

annie511266728 | today at 2:47 AM

[dead]