Hacker News

avaer, yesterday at 6:19 AM

Remember when GPT-3 had a $100 spending cap because the model was too dangerous to be let out into the wild?

Between these models egging people on to suicide, straightforward jailbreaks, and now damage caused by what seems to be a pretty trivial set of instructions running in a loop, I have no idea what AI safety research at these companies is actually doing.

I don't think their definition of "safety" involves protecting anything but their bottom line.

The tragedy is that you won't hear from the people who are actually concerned about this and refuse to release dangerous things into the world, because they aren't raising a billion dollars.

I'm not arguing for stricter controls -- if anything I think models should be completely uncensored; the law needs to get with the times and severely punish the operators of AI for what their AI does.

What bothers me is that the push for AI safety is really just a ruse for companies like OpenAI to ID you and exercise control over what you do with their product.


Replies

pixl97, yesterday at 3:02 PM

>I have no idea what AI safety research at these companies is actually doing.

If you had looked at AI safety research before the days of LLMs, you'd realize that AI safety is hard. Like really, really hard.

>the operators of AI for what their AI does.

This is like saying you should only punish a company after it dumps plutonium in your yard, ruining it for the next million years, even though everyone warned them it was going to leak. Reacting only after the dangerous event happens is not an intelligent plan of action.

stevage, yesterday at 7:25 AM

Didn't the AI companies scale down or get rid of their safety teams entirely when they realised they could be more profitable without them?
