Hacker News

habinero · yesterday at 8:27 PM · 1 reply

I love Hank, but he has such a weird EA-shaped blind spot when it comes to AI. idgi

It is true that "more diversity in code" probably means fewer turnkey spray-and-pray compromises, sure. Probably.

It also means that the models themselves become targets. If your models start building the same generated code with the same vulnerability, how're you gonna patch that?


Replies

kay_o · yesterday at 10:07 PM

> start building the same generated code with the same vulnerability

This situation is pretty funny to me. Some of my friends who aren't technical tried vibe coding, showed me what they built, and asked for feedback.

I noticed they were using Supabase by default, and pointed out that their database was completely open, with no RLS (row-level security).

So I told them not to use Supabase that way, and they asked the AI (various different LLMs) to fix it. One example prompt I saw was: "please remove Supabase because of the insecure data access and make a proper secure way."

Keep in mind, these people don't have a technical background and do not know what Supabase or Node or Python is. They let the LLM install Docker, Node, etc., and just hit approve on "Do you want to continue? bash(brew install ..)"

What's interesting is that this happened multiple times with different AI models. Instead of fixing the problem the way a developer normally would, like moving the database logic to the server or creating proper API endpoints, the AI tried to recreate an emulation of Supabase, specifically PostgREST, in a much worse and less secure way.

The result was an API endpoint that looked like: /api/query?q=SELECT * FROM table WHERE x
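For contrast, the "proper API endpoint" fix mentioned above means the server owns the SQL and the client only supplies bound parameters, never query text. A minimal sketch (the `notes` table, column names, and `fetch_notes` function are all hypothetical, illustrated with an in-memory SQLite database):

```python
import sqlite3

def fetch_notes(conn: sqlite3.Connection, owner_id: int) -> list:
    # The SQL statement is fixed server-side; the client can only supply
    # the bound parameter, so it cannot change what the query does.
    return conn.execute(
        "SELECT id, body FROM notes WHERE owner_id = ?", (owner_id,)
    ).fetchall()

# Demo with an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (id INTEGER, owner_id INTEGER, body TEXT)")
conn.execute("INSERT INTO notes VALUES (1, 42, 'hello'), (2, 7, 'secret')")

print(fetch_notes(conn, 42))  # only rows owned by user 42: [(1, 'hello')]
```

Compare that with `/api/query?q=SELECT ...`, where the client ships arbitrary SQL and the server just runs it.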

In one example GLM later bolted on a huge "security" regular expression that blocked , admin, updateadmin, ^delete* lol
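A regex blocklist like that is trivially bypassable. A hypothetical reconstruction of the pattern described (the exact regex GLM generated isn't shown, so this is an assumption) makes the failure mode concrete:

```python
import re

# Hypothetical reconstruction of the naive blocklist described above:
# reject queries containing a few "dangerous" keywords.
BLOCKLIST = re.compile(r"(admin|updateadmin|^delete)", re.IGNORECASE)

def is_allowed(query: str) -> bool:
    return BLOCKLIST.search(query) is None

# It catches the obvious case...
print(is_allowed("DELETE FROM users"))    # blocked

# ...but misses trivial equivalents:
print(is_allowed("  delete FROM users"))  # allowed: ^delete only matches at the very start
print(is_allowed("DROP TABLE users"))     # allowed: DROP isn't on the list at all
```

Blocklisting keywords can never enumerate every dangerous statement, which is why the real fix is to not accept raw SQL from the client in the first place.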