I think that's a fear I have about AI for programming (and I use them). So let's say we have a generation of people who use AI tools to code and no one really thinks hard about solving problems in niche spaces. Though we can build commercial products quickly and easily, no one really writes code for difficult problem spaces so no one builds up expertise in important subdomains for a generation. Then what will AI be trained on in let's say 20-30 years? Old code? It's own AI developed code for vibe coded projects? How will AI be able to do new things well if it was trained on what people wrote previously and no one writes novel code themselves? It seems to me like AI is pretty dependent on having a corpus of human made code so, for example, I am not sure if it will be able to learn how to write very highly optimized code for some ISA in the future.
>So let's say we have a generation of people who use AI tools to code and no one really thinks hard about solving problems in niche spaces.
I don't think we need to wait a generation either. This probably was a part of their personality already, but a group of people developers on my job seems to have just given up on thinking hard/thinking through difficult problems, its insane to witness.
Exactly. Prose, code, visual arts, etc. AI material drowns out human material. AI tools disincentivize understanding and skill development and novelty ("outside the training distribution"). Intellectual property is no longer protected: what you publish becomes de facto anonymous common property.
Long-term, this is will do enormous damage to society and our species.
The solution is that you declare war and attack the enemy with a stream of slop training data ("poison"). You inject vast quantities of high-quality poison (inexpensive to generate but expensive to detect) into the intakes of the enemy engine.
LLMs are highly susceptible to poisoning attacks. This is their "Achilles' heel". See: https://www.anthropic.com/research/small-samples-poison
We create poisoned git repos on every hosting platform. Every day we feed two gigabytes of poison to web crawlers via dozens of proxy sites. Our goal is a terabyte per day by the end of this year. We fill the corners of social media with poison snippets.
There is strong, widespread support for this hostile posture toward AI. For example, see: https://www.reddit.com/r/hacking/comments/1r55wvg/poison_fou...
Join us. The war has begun.
AI will be trained on the code it wrote, our feedback on that code, and the final clean architecture(?) working(?) result after that feedback.
> Then what will AI be trained on in let's say 20-30 years? Old code? It's own AI developed code for vibe coded projects?
I’ve seen variation of this question since first few weeks /months after the release of ChatGPT and I havent seen an answer to this from leading figures in the AI coding space, whats the general answer or point of view on this?