Hacker News

The L in "LLM" Stands for Lying

589 points by LorenDB | today at 4:02 AM | 412 comments

Comments

raincole | today at 8:34 AM

> Video games stand out as one market where consumers have pushed back effectively

No, that's simply untrue. Players only object to AI art assets, and only when they're painfully obvious. No one cares how the code is written.

If you actually read the wording of the Steam AI survey, you'll see Steam has completely caved on AI-generated code as well. It's specifically worded like this:

> content such as artwork, sound, narrative, localization, etc.

No 'code' or 'programming.'

If game players are the most anti-AI group then it's crystal clear that LLM coding is inevitable.

> This stands in stark contrast to code, which generally doesn't suffer from re-use at all, or may even benefit from it, if it's infrastructure.

Yeah, exactly. And LLMs save developers from writing the same thing that has been done by other developers a thousand times. I don't know how one can spin this as a bad thing.

> Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.

Spore is well acclaimed. Minecraft is literally the best-selling game ever. The fact that one developer fumbled it doesn't make the idea of procedural generation bad. This is a perfect example of how a tool isn't inherently good or bad; it's up to the tool's wielder.

wolvesechoes | today at 12:27 PM

I am a bit tired of such discussions.

I don't care if LLMs are good at coding or bad at it (in my experience the answer is "it depends"). I don't care how good they are at anything else. What matters in the end is that this tech is not here to empower the common person (although it could). It is not here to make our lives better, more worthwhile, more satisfying (it could do that as well). It is here to reduce our agency, to make it easier to fire us, to put us in an even more precarious position, to funnel even more wealth from those who have little to those who have a lot.

Yet what I see are pigs discussing the usefulness of a bacon-making machine just because it also happens to produce tasty soybean feed. They forget that it is not soybean feed that their owner bought this machine for, and that their owner expects a return on that investment.

noemit | today at 11:08 AM

Many people don't know this, but the Luddites were right. I studied Art History and this particular movement. One of the claims of the Luddites was that quality would go down, because their craft took half a lifetime to master (it was passed down from parent to child).

I was able to feel wool scarves made in Europe in the Middle Ages (in museum storage, under the guidance of a curator). They are a fundamentally different product from what is produced in woolen mills. A handmade (in the old tradition) woolen scarf can be pulled through a ring, because it is so thin and fine. Not so for a modern mill-made scarf.

Another interesting thing is that we do not know how they made them so fine. The technique was never recorded or documented in detail, as it was passed down from parent to child. So the knowledge is actually lost forever.

Weavers in Kashmir work at a similar level of quality, but their wool is different and their needs and techniques are different, so while we still have craftsmen who can produce wool by hand, most of the traditions and techniques are lost.

Is it a tragedy? I go back and forth. Obviously the heritage fabrics are phenomenal and luxurious. Part of me wishes that the tradition could have been maintained through a luxury sector.

Automation is never a 1:1 improvement. It's not just about the speed or process. The process itself changes the product. I don't know where we will net out on software, and I do think the complaints are justified - but the Luddites were also justified. They were *Right*. Their whole argument was that the mills could not produce fabric of the same quality. But being right is not enough.

I'm already seeing vibe-coded internal tools at an org I consult at saving employees hundreds of hours a month, because a non-technical person was empowered to build their own solution. It was a mess, and I stepped in to help optimize it, but I only optimized it partially, making it faster. I let it be the spaghetti mess it was for the most part. Why? Because it was making an impact already. The product was succeeding. And it was a fundamentally different product than what internal tools were 10 years ago.

simianwords | today at 8:26 AM

What the author and many others find hard to digest is that LLMs are surfacing the reality that most of our work is a small bit of novelty against boilerplate, redundant code.

Most of what we do in programming is some small novel idea at a high level and repeatable boilerplate at a low level. A fair question is: why hasn't the boilerplate been automated as libraries or other abstractions? LLMs are especially good at fuzzy abstracting of repeatable code, and it's simply not possible to get the same result from other, manual methods.

I empathise, because it is distressing to realise that most of the value we provide is not in those lines of code but in that small innovation at the higher layer. No developer wants to hear that; they would like to think each lexicon is a creation from their soul.

hwers | today at 10:19 AM

It's unfortunate that there's mode collapse around what the consensus "best way" to use these things is. It's too bad we didn't have a period where these things were great teachers but didn't attempt to write code, because in my opinion the ideal way to use them is not as agents mass-producing sloppy, buggy, disorganized code, but to teach you things far faster than the old alternatives, to rubber-duck, and occasionally to write snippets of functions when your brain is too tired, or it's throwaway CLI code, or some API you're not familiar with.

theshrike79 | today at 7:28 AM

> This sort of protectionism is also seen in e.g. controlled-appelation foods like artisanal cheese or cured ham. These require not just traditional manufacturing methods and high-quality ingredients from farm to table, but also a specific geographic origin.

Maybe "Artisanal Coding" will be a thing in the future?

DavidPiper | today at 8:21 AM

> This stands in stark contrast to code, which generally doesn't suffer from re-use at all ...

This is an absolute chef-kiss double-entendre.

sowbug | today at 7:53 PM

This is just the TTP metric[1] all over again, isn't it?

[1] https://knowyourmeme.com/sensitive/memes/time-to-penis-ttp

doodaddy | today at 10:38 AM

There’s a cold reality that we in this profession have yet to accept: nobody cares about our code. Nobody cares whether it’s pretty or clever or elegant. Sometimes, rarely, they care whether it’s maintainable.

We are only craftsmen to ourselves and each other. To anyone else we are factory workers producing widgets to sell. Once we accept this then there is little surprise that the factory owners want us using a tool that makes production faster, cheaper. I imagine that watchmakers were similarly dismayed when the automatic lathe was invented and they saw their craft being automated into mediocrity. Like watchmakers we can still produce crafted machines of elegance for the customers who want them. But most customers are just going to want a quartz.

simonw | today at 1:38 PM

Wait for the header animation to run and then hit the "play" button on the about page, it's very cool: https://acko.net/about

(Worked in Firefox on macOS, doesn't seem to work in Mobile Safari)

einr | today at 7:09 AM

This rules. What a good, sensible, sober post.

plasticeagle | today at 8:24 AM

Acko.net remains the best website on the internet.

Copenjin | today at 7:59 AM

I instantly remembered the page header; I probably last visited this site 10 years ago or something.

stuaxo | today at 3:13 PM

I've been thinking this for a while. Attribution is sorely needed; I believe there was a recent paper on this, so maybe it will happen.

In the meantime, the only way to really sort this is to have models trained only on code under some particular kind of license. There is quite a big corpus of GPL'd code out there, so a GPL-based model could potentially be one of the first, and of course its output could only be GPL.

Retr0id | today at 4:42 PM

> Classic procedural generation is noteworthy here as a precedent, which gamers were already familiar with, because by and large it has failed to deliver.

With the notable exception of Minecraft terrain generation, which I think most would say was successful in what it set out to achieve.

chmod775 | today at 3:09 PM

Content aside, that website is very nice. It goes a bit crazy with animations in places (header, switching articles), but that does not taint the experience of reading something on it in the slightest.

It feels like a modern twist on a bygone era of the web.

emsign | today at 8:09 AM

> It's not a co-pilot, it's just on auto-pilot.

Love it. Calling it "Copilot" in itself is a lie. Marketing speak to sell you an idea that doesn't exist. The idea is that you are still in control.

kombookcha | today at 7:13 AM

What a wonderful read.

anilgulecha | today at 8:15 AM

>If you ask me, no court should have ever rendered a judgement on whether AI output as a category is legal or copyrightable, because none of it is sourced. The judgement simply cannot be made, and AI output should be treated like a forgery unless and until proven otherwise.

Guilty until proven innocent will satisfy the author's LLM-specific point of contention, but it is hardly a good principle.

dostick | today at 12:50 PM

It seems like hallucinations and lying have increased over time; it's very different now from what it was 2 years ago. Is this because of training bias? Is there any research data on the dynamics over the past few years?

vladms | today at 8:31 AM

> Whether something is a forgery is innate in the object and the methods used to produce it. It doesn't matter if nobody else ever sees the forged painting, or if it only hangs in a private home. It's a forgery because it's not authentic.

On a philosophical level I do not get the discussions about paintings. I love a painting for what it is, not for being the first or the only one. An artist who paints something that I can't distinguish from a Van Gogh is a very skillful artist, and the painting is very beautiful. Me labeling it "authentic" or not should not affect its artistic value.

For a piece of code you might care about many things: correctness, maintainability, efficiency, etc. I don't care if someone wrote bad (or good) code by hand or used an LLM; it is still bad (or good) code. Someone, whether an LLM or a software developer, has to decide if the code fits the requirements, and this will not go away.

> but also a specific geographic origin. There's a good reason for this.

Yes, but the "good reason" is more probably people's desire to have monopolies and resist change. Same as with the paintings: if the cheese is 99% the same, I don't care whether it was made in a particular region or not. Of course the region is happy because it means more revenue for them, but I'm not sure it is good.

> To stop the machines from lying, they have to cite their sources properly.

I would be curious how this could be applied to a human. Should we also cite all the courses and articles that we have read on a topic when we write code?

nerdyadventurer | today at 3:05 PM

Problem is, these lies can be sophisticated, which can eat up hours of our time.

fredgrott | today at 9:01 PM

There are reasons why I measure code "input" via testing and static code analysis:

1. It re-establishes my mental map of the code base.
2. It separates noise from signal.
3. It makes it a breeze to refactor to reduce code complexity.

Notice anything yet?

If I replace AI code input with my own curated techniques from known and measured code sources, in the long term I save more time than those who rely upon AI vibe coding.

Extasia785 | today at 1:00 PM

I don't agree with many statements in the article. It almost seems like an article from about a year ago, despite it being posted yesterday. Not sure if the author had the idea a long time ago and just took his time to finish it up, but the "vibe-coding" he describes surely isn't the current way of using LLMs in a codebase.

While LLMs are surely used to generate a lot of slop-code and overwhelm (open source) code bases, this surely isn't the only thing they can do. I dislike discussing the potential of a technology exclusively by looking at its negative impact.

LLMs in proper hands don't create code which is "stolen"; they also shouldn't create unnecessary code, and they definitely don't remove any of the ownership of the programmer, at least not any more than using a mighty IDE does.

The problem seems to be in the usage of LLMs. These effects definitely do happen when just releasing an agent on a codebase without any oversight. But they can also largely be mitigated by using frameworks such as Openspec or Spec-Kit, properly designing a spec, plan, granular tasks and manually reviewing all code yourself. The LLM should not be responsible for any creative idea, it should at most verify the practicality against the codebase. When doing that, the entire creative control is in the hands of the programmer and so is the mechanical execution. The LLM is reduced to a very powerful autocomplete with a strict harness around it. Obviously this also doesn't lead to 10x or even 100x improvements in speed like some AI merchants promise, but in my personal experience the speedup is still significant enough to make LLMs a very, very useful technology.

est | today at 8:21 AM

I wouldn't call that forgery, but commission.

By the way, you can make git commits with the AI as the author and yourself as the committer, which makes git blame easier.
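A minimal sketch of that author/committer split, assuming git is installed; the names and emails are placeholders, not a convention:

```shell
# Create a throwaway repo to demonstrate the split.
repo=$(mktemp -d)
git init -q "$repo"
cd "$repo"
echo 'print("hi")' > script.py
git add script.py

# Author = who wrote the change (the AI); committer = who applied it (you).
GIT_COMMITTER_NAME="Jane Dev" GIT_COMMITTER_EMAIL="jane@example.com" \
git commit -q --author="Claude <ai@example.com>" -m "Add script (AI-generated)"

# Both identities are recorded separately:
git log --format='author: %an | committer: %cn'
# prints: author: Claude | committer: Jane Dev
```

`git blame` and `git log` can then distinguish who generated a line from who reviewed and applied it.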

bhekanik | today at 2:00 PM

I think the framing as “lying” is emotionally accurate for users but technically misleading for builders. Most failures I see in production aren’t intentional deception, they’re confidence calibration failures: the model sounds certain when it should sound tentative.

What helped us most was treating outputs like untrusted input: retrieval + citations for factual claims, strict tool permissions, and an explicit “I don’t know” path that’s rewarded instead of punished. That doesn’t make LLMs truthful, but it does make them a lot less brittle.

So to me the key question isn’t “are models liars?” but “what product and engineering constraints make wrong answers cheap and detectable?”

GaryBluto | today at 9:35 AM

I think it says a lot about this opinion piece that the people agreeing with it are posting short comments saying "So true!" and "Great!" whilst the people criticizing it are writing paragraphs of well-spoken criticism.

youknownothing | today at 3:49 PM

which of the two L's?

falcor84 | today at 2:55 PM

Which of the two Ls?

liampulles | today at 11:36 AM

I see a future where I program at work less, which is sad, but c'est la vie. I think the challenge of the job will be herding and managing my own context for larger codebases managed by smaller teams, and finding ways to allow for more experimental/less verified code in prod. And plenty of consulting work for companies which have vibe-coded their business and are left with a totally fucked data model (if not codebase).

A Private (system) Investigator. :)

notepad0x90 | today at 12:58 PM

The framing of an LLM's response as truth vs lie is in itself incorrect.

In order to lie, one needs to understand what truth and objective reality are.

Even with people, when a flat-earther tells you the earth is flat, they're not lying, they're just wrong.

All LLM output is speculation. All speculation, by definition, has some probability of being incorrect.

---

We can go even deeper in a philosophical sense. If I make the audacious claim that 2 + 2 = 4, I may think it's true, but I'm still speculating that the objective reality I experience is the same one others also experience, and that my senses and mental faculties, and therefore the qualia making up my reality, are indeed intact, correct, and functional. So is there a degree of speculation even when I make that claim?

Regardless, I am able to agree upon a shared reality with the rest of the world, and I also share a common understanding of truth and untruth. If I lied, it could only be caused by an intention to mislead others. For example, if I claimed to be the president of the United States, of course that would be incorrect (thankfully!), but since we all agree that no one reading this post would actually be misled into thinking I am the POTUS, it isn't a lie. Perhaps sarcasm, a failed attempt at humor, or just trolling. It is untruth, but it isn't a lie; no one was misled. You need intent (an LLM isn't capable of one), and that intent needs to be, at least in part, an intent to mislead.

Kwpolska | today at 11:31 AM

> Engineers who know their craft can still smell the slop from miles away when reviewing it, despite the "advances" made. It comes in the form of overly repetitive code, unnecessary complexity, and a reluctance to really refactor anything at all, even when it's clearly stale and overdue.

I’ve seen reluctance to refactor even 10+-year-old garbage long before LLMs were first made available to the broader public.

adamtaylor_13 | today at 2:23 PM

This argument falls apart on the very first bullet point. The author claims:

> If someone produces a painting in the style of Van Gogh, and passes it off as being made by Van Gogh, by putting his signature on it, that painting is a forgery.

Which is true. But the implication that follows is false.

Van Gogh's artwork is valuable specifically because of his identity. I find much of his artwork particularly hideous. That's fine! Someone else finds value in it specifically because of who painted it.

This metaphor doesn't appear to apply to code at all. The entire value of code is what it does, not who wrote it.

Honestly, I stopped reading after the first bullet point because these types of arguments feel lazy and the attitude of the people writing these articles frequently comes across as holier than thou.

You don't like LLMs? Great, don't use them. Using Van Gogh's paintbrush doesn't mean I'm making a forgery. I'm just painting, my friend.

visarga | today at 4:56 PM

Stochastic Parrots hits back! Another author thinks LLMs only reproduce. If that were so, we could have used `cp` for free.

wilg | today at 7:43 AM

LLMs are pretty cool technology and are useful for programming.

teleforce | today at 3:01 PM

Whether we like it or not, the only constant in life is change.

>What's the Excel of JSON

Ever heard of CUE, a configuration language compatible with JSON and YAML, introduced by ex-Googlers? It seamlessly supports both types and values, whereas Excel supports ephemeral values [1].

Both CUE and the original Excel are non-Turing-complete, so they avoid the notorious and tricky halting problem.

Someone needs to seamlessly integrate LLMs with CUE, their deterministic distant cousin from NLP, based on lattice-valued logic [2],[3].

Truth be told, LLMs are like the automated looms of 19th-century Britain that kick-started the industrial revolution. Heck, the Toyota conglomerate was once a pioneering manufacturer of modern automated looms, and look where they are now after embracing change and pivoting to vehicle manufacturing.

The automated loom commoditized the manual weaving industry (not unlike modern software engineering) into oblivion in India, turning Mughal India, once the highest-GDP economy in the entire world, into the lowest-GDP India of colonial times (include the Indian subcontinent, namely Afghanistan, Pakistan and Bangladesh, if you want an apples-to-apples comparison) [4].

Ignore LLMs at your peril in the name of so-called moral authenticity/forgery/lies/etc., and you may go the way of 20th-century India and its subcontinent, settling at only a fraction of the Mughal empire's GDP at its very peak.

> Is there a standard CRDT-like protocol for syncing editable graphs yet?

That's for other HN comments, but spoiler alert: it's called D4M, by the nice folks from MIT [5]. We probably don't need full CRDT; local-first capability with eventual consistency will more than suffice for most things that are of importance.

[1] CUE lang:

https://cuelang.org/

[2] The Logic of CUE:

https://cuelang.org/docs/concept/the-logic-of-cue/

[3] Guardrailing Intuition: Towards Reliable AI:

https://cue.dev/blog/guardrailing-intuition-towards-reliable...

[4] Economy of the Mughal Empire:

https://en.wikipedia.org/wiki/Economy_of_the_Mughal_Empire

[5] D4M: Dynamic Distributed Dimensional Data Model:

https://d4m.mit.edu/

barcodehorse | today at 6:58 AM

Lovely lizard machine.

incomingpain | today at 1:35 PM

I find these criticisms to be similar to those low-context brain teasers.

If a woman gets married at 25 and her kid is 25, how old is she?

This is what LLMs are dealing with. You don't tell them everything they need to know, and they are left to fill in the gaps. Which may, and often does, mean they lie.

That's what agentic systems do differently: they go find the gaps before answering.

Agentic is AGI. You can hire plenty of minimum-wage workers who are generally intelligent and don't even go to that level.

cess11 | today at 1:25 PM

What weirds me out is that it seems few US corporations care that they don't have copyright to their synthesised code, if the rumours regarding this are correct.

If you don't have the copyright, then you can't license or litigate it under the common rules of software. If someone 'steals' it you can at best go after them with some trade secret case, and I suspect this would be limited if you had already shared the code with them, e.g. because they helped you synthesise it.

GuestFAUniverse | today at 7:23 AM

And "lazy".

Claude makes me mad: even when I ask for small code snippets to be improved, it increasingly starts to comment on "what I could improve" in the code instead of generating the embarrassingly easy code with the improvement itself.

If I point that out with something like "include that yourself", it does a decent job.

That's so _L_azy.

spacecadet | today at 12:34 PM

The author's logic only works for software engineers, and as I have said time and time again: software engineers have been automating people out of their passions for decades, and now it has come for yours... The lying here is LYING TO YOURSELVES.

nathias | today at 12:21 PM

I'm selling hand-crafted template code if anyone is interested.

feverzsj | today at 6:45 AM

More like Lunatic.

baq | today at 7:18 AM

Lying implies knowing what’s true

RS-232 | today at 7:36 PM

More desperate virtue signaling and gatekeeping.

“Look at me and the code that took me eons to perfect. It’s handcrafted and genuine.”

Newsflash: nobody cares, especially if it’s expensive, time consuming, or doesn’t work. They also don’t care about “artisanal” PDO cheese with a 30% tariff that still tastes like shit.

“The posers are stealing our thunder. Forgery!! They terk r jerbs!”

Sink or swim. You decide.

whywhywhywhy | today at 1:30 PM

Sorry to say this feels like cope from someone who is a talented developer in more niche areas finally seeing AI start to solve problems in those areas.

DonHopkins | today at 10:38 AM

Pretend Intelligence (PI) — Design Note & Tribute

A short design note and tribute to Richard Stallman (RMS) and St. IGNUcius for the term Pretend Intelligence (PI) and the ethic behind it: don’t overclaim, don’t over-trust, and don’t let marketing launder accountability.

https://github.com/SimHacker/moollm/blob/main/designs/PRETEN...

1. What PI Is

Richard Stallman proposes the term Pretend Intelligence (PI) for what the industry calls “AI”: systems that pretend to be intelligent and are marketed as worthy of trust. He uses it to push back on hype that asks people to trust these systems with their lives and control.

From his January 2026 talk at Georgia Tech (YouTube, event, LibreTech Collective):

https://www.youtube.com/watch?v=YDxPJs1EPS4

> "So I've come up with the term Pretend Intelligence. We could call it PI. And if we start saying this more often, we might help overcome this marketing hype campaign that wants people to trust those systems, and trust their lives and all their activities to the control of those systems and the big companies that develop and control them." — Richard Stallman, Georgia Tech, 2026-01-23. Source: YouTube (full talk) — "Dr. Richard Stallman @ Georgia Tech - 01-23-2026," Alex Jenkins, CC BY-ND 4.0; transcript in video description.

So PI is both a label (call it PI, not AI) and a stance: resist the campaign to make people trust and hand over control to systems and vendors that don’t deserve that trust. In MOOLLM we use the same framing: we find models useful when we don’t overclaim — advisory guidance, not a guarantee (see MOOAM.md §5.3).

[...]

Richard Stallman critiques AI, connected cars, smartphones, and DRM (slashdot.org) | 42 points by MilnerRoute | 38 days ago | 10 comments

https://news.ycombinator.com/item?id=46757411

https://news.slashdot.org/story/26/01/25/1930244/richard-sta...

GNU: Words to Avoid: Artificial Intelligence:

https://www.gnu.org/philosophy/words-to-avoid.html#Artificia...

...currently not responding... archive.org link:

https://web.archive.org/web/20260303004610/https://www.gnu.o...

aeon_ai | today at 1:39 PM

This is such a comically bad take.

The use of loaded and pejorative language like "forgery" emphasizes that this is not a logical argument but a moral one. The repeated comparisons to "true craft" reveal that the author would prefer code be regarded like artisanal cheese.

Beyond the pretension, it's head-in-the-sand to imply that the technology hasn't progressed. It's just very clearly not true to anyone who is paying attention: longer tasks, better code, fewer errors. I'm somebody who actively despises the hype bullshit-machine that SV has turned into, but technology is an industry for pragmatists who can leverage what works. And LLMs do.

If you don't like the technology, you have every right to scream that from the mountaintops. As it stands, this just serves as no more than a rallying cry to the ignorant.

einpoklum | today at 10:28 AM

> Open source software maintainers have been one of the first to feel the downsides. ... The last thing they needed was to receive slop-coded pull requests from contributors merely looking to cheat their way into having a credible GitHub resumé... As a result, projects have closed down public contributions and dropped their bug bounties...

Has this really been people's experience?

I develop and maintain several small FOSS projects, some of which are moderately popular (e.g. 90,000-user Thunderbird extension; a library with 850 stars on GitHub). So, I'm no superstar or in the center of attention but also not a tumbleweed. I've not received a single AI-slop pull request, so far.

Am I an exception to the rule? Or is this something that only happens for very "fashionable" projects?
