Hacker News

roywiggins · today at 1:55 PM · 3 replies

I think my favorite part is that, because it only applies to "frontier models", if a smaller model is blamed for such harm, it seemingly doesn't immunize the creators at all. That makes very little sense unless you specifically want to make it illegal to not be OpenAI (et al).

Similarly, if a frontier model kills merely 99 people, they aren't covered by this. So go big or go home I guess?


Replies

Topfi · today at 2:03 PM

> unless you specifically want to make it illegal to not be OpenAI [...]

If that is an "unintended" consequence, I am certain OpenAI wouldn't be opposed. Preventing competition while keeping any potentially profit-risking regulations at bay has been a clear throughline in OpenAI's lobbying efforts.

JumpCrisscross · today at 2:46 PM

> because it only applies to "frontier models", if a smaller model is blamed for such harm, it seemingly doesn't immunize the creators at all

Oof. If you're an Illinois resident, please call your elected representatives and at least ensure they understand this loophole is there. In all likelihood, nobody other than OpenAI's lobbyists has noticed it.

euio757 · today at 4:51 PM

> "Frontier model" means an artificial intelligence model that:

> (1) is trained using greater than 10^26 computational operations, such as integer or floating-point operations; or

> (2) has a compute cost that exceeds $100,000,000

Such a strange regulation. Usually large thresholds like this are set so that burdensome regulation applies only to very big players (if you're spending $100 million on training, you can afford a dedicated team to follow such regulation).
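The quoted definition reduces to a simple either-or threshold test, which makes the loophole easy to see: a model just under both lines falls outside the definition entirely. A minimal sketch of that test (function and parameter names are illustrative, not from the bill's text):

```python
# Hypothetical sketch of the quoted "frontier model" test.
# Names here are illustrative assumptions, not statutory language.

def is_frontier_model(training_ops: float, compute_cost_usd: float) -> bool:
    """A model qualifies if EITHER threshold in the quoted definition is met."""
    return training_ops > 1e26 or compute_cost_usd > 100_000_000

# A lab just under both thresholds is not a "frontier model" at all,
# so the bill's provisions (including any immunity) wouldn't apply to it.
print(is_frontier_model(9e25, 90_000_000))  # smaller model: not covered
print(is_frontier_model(2e26, 50_000_000))  # big training run: covered
```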

But here it seems to be an anti-competitive move against market entrants who haven't made it into the big leagues yet...

Sounds like the saga of certain players pushing for Biden's EO 14110, but this time at the state level?