Quoting the original bill [0]:
> "Critical harm" means the death or serious injury of 100 or more people or at least $1,000,000,000 of damages to rights in property caused or materially enabled by a frontier model, through either: (1) the creation or use of a chemical, biological, radiological, or nuclear weapon; or (2) engaging in conduct that: (A) acts with no meaningful human intervention; and (B) would, if committed by a human, constitute a criminal offense that requires intent, recklessness, or negligence, or the solicitation or aiding and abetting of such a crime.
I don't know what I expected from this title, but I was hoping it was more sensationalized. No need in this case, unfortunately.
> (a) A developer shall not be held liable for critical harms if the developer did not intentionally or recklessly cause the critical harms and the developer: (1) published a safety and security protocol on its website that satisfies the requirements of Section 15 and adhered to that safety and security protocol prior to the release of the frontier model; (2) published a transparency report on its website at the time of the frontier model's release that satisfies the requirements of Section 20. The requirements of paragraphs (1) and (2) do not apply if the developer does not reasonably foresee any material difference between the frontier model's capabilities or risks of critical harm and a frontier model that was previously evaluated by the developer in a manner substantially similar to this Act.
However one thinks regulation for this should be drafted, I doubt providing a PDF is what most have in mind.
[0] https://trackbill.com/bill/illinois-senate-bill-3444-ai-mode...
As an Iowan, this reminds me a lot of the bill that's been pushed through my state's senate twice now (as recently as last year), which would prevent Iowans from filing lawsuits against pesticide and herbicide companies if those companies follow the EPA's labeling guidelines. The bill passed the senate both times, only stopped because the house declined to take it up.
For context, Iowa has the fastest growing rate of new cancer diagnoses in the country, and the second highest overall cancer rate.
We built systems we don’t fully understand, so naturally the next step is… immunity
Am I alone in thinking this is easy?
The human making the decision is always liable.
What if the human couldn't reasonably know better? Doesn't matter - if they made the same decision without AI or with old files, it is still on them.
What if there's no single human decision? Someone is in charge and is responsible. The "I was ordered to" isn't a defense.
Does liability without power make sense? People executing have the power to execute. So, liability. If they're executing without power, that is a different liability, but still a liability.
It may let the powerful off the hook - that is already a theme, and AI doesn't change that; in fact, it will just be used as another scapegoat.
God told me to do it - Watertight! Right?
So they did the math and worked out that it's cheaper and easier to lobby the government than to make their product safe.
And these are the people that a lot of programmers want to give the keys to the kingdom. Idiocracy really is in full effect.
Let’s see how long until this is flagged off the front page. I’ll put the over/under at 1 hour from the posted time
I forget, wasn't OpenAI the company that was formed as a nonprofit to limit the risks of LLMs? Founded by a bunch of visionaries scared of what they had wrought and anxious to lead so they could make sure it was only used responsibly?
Illinois also has a bill in committee right now to mandate operating-system-level age verification. There are lots of bad ideas to be upset about this year. If you are an Illinois resident, email your representative about HB 5511 today. Stupid legislation like this passes because we don't speak up. Find out who your representative is, find their email, and tell them your opinion.
So much for "Our mission is to ensure that artificial general intelligence benefits all of humanity." I was naive to hope that no such laws would ever pass.
This seems par for the course for OpenAI/Sam Altman.
Unfortunately they are not the first company to try and externalize their costs, and they will not be the last.
Serious question, maybe a bit naive: Is there anything we can do to push back against and discourage the externalization of costs onto others?
Is this simply a matter of greed and profit-seeking outweighing one's morals (assuming one has them to begin with)?
OpenAI wants to not be responsible for "accidents" that kill more than 100 people, despite some critics arguing that their current actions are likely to cause such harms.
Take all of the data, take all of the credit, take all of the money, and none of the blame.
That would be a better mission statement for OpenAI at this point.
I am not sure what the other side of this argument looks like: unlimited liability (i.e., liability no matter how poorly the tech is implemented and used)?
That would be quite a novel burden, one that (afaik) no other tech has had to carry so far. We always assumed some operator responsibility. It's interesting to think of AI as a tech that could feasibly guardrail itself internally, and, maybe more so with increasing capability, that no human can be expected to do so in its stead. But surely some limits must apply, and the more interesting question, as with any other tool, is what those limits are.
No different from shielding game studios from liability for mass shootings. Reminds me of the post-Columbine hysteria, when the media was super critical of Doom and Nine Inch Nails.
Have the sponsors of this bill stated what the public benefit of providing these immunities would be? Just “more models, more progress, go faster?”
I think there’s room for nuance but I don’t see how this could possibly be construed to be in the public interest.
So they want protection from harms caused by their own models. Classic move — lobby for the rules while you're still ahead of regulators who don't fully understand the technology yet. Would be interesting to see what happens when a state actually pushes back hard.
OpenAI has now officially absorbed Facebook/Zuck's ethos of "move fast and break things", no matter if the thing broken is society itself... as long as their share price "goes up".
They even hired infamous former FB staff and have, in recent months, been employing the same addictive "engagement" product patterns.
The thing that bugs me the most about OpenAI isn't the AI-enabled mass deaths. It's the hypocrisy.
Yep, this is everything wrong with AI in one easy-to-protest package, but do keep going on and on about the evils of datacenters, how they're coming for your jobs, and how AI art isn't art. That's really winning hearts and minds!
My entire company switched from OpenAI to Anthropic after the Department of War idiocy that happened a few weeks ago.
Anthropic isn't perfect by a long shot, but at least they stand by a couple of morals.
Sure, and Google, Facebook, and Twitter support Section 230, which gives them cover for hosting others' content.
Backing legislation that takes liability off them is something companies will always do.
By "back Illinois bill", does that mean they wrote it?
Having worked for OpenAI will be the new "MindGeek" on LinkedIn.
The inevitable result of giving corporations and executives complete immunity from the harms they cause is that people will stop resorting to the legal system and begin resorting to extralegal measures.
And the likely result is that in most of the country those extralegal measures would have to be very extreme to secure a guilty verdict. You can see the beginnings of it now with the ICE protest trial verdicts.
Skynet begins learning at a geometric rate.
Is this for, like, military scenarios, or more like ChatGPT designed a drug that seemed to work, but people died by the millions 5 years later? Because they should 100% be liable for the latter. As for the former, good luck trying to prosecute an AI company for something the military does. To an extent, the military would probably want their AI models behind their private network, completely firewalled from any public network (SIPRNet, iirc). If they lock it down behind a highly classified network, good luck figuring out how they're using AI.
Without getting even more eyes on me, these company boards are inadequately scared for their personal safety.
It feels like OpenAI knows they've lost, and their only hope is getting saved by the US military-industrial complex. I have a more restrained opinion about other AI companies and LLM tech more broadly, but for OpenAI specifically I hope they go bankrupt sooner rather than later.
Is there something equivalent in other industries that we can compare to?
This is the summary:
>Creates the Artificial Intelligence Safety Act. Provides that a developer of a frontier artificial intelligence model shall not be held liable for critical harms caused by the frontier model if the developer did not intentionally or recklessly cause the critical harms and the developer publishes a safety and security protocol and transparency report on its website. Provides that a developer shall be deemed to have complied with these requirements if the developer: (1) agrees to be bound by safety and security requirements adopted by the European Union; or (2) enters into an agreement with an agency of the federal government that satisfies specified requirements. Sets forth requirements for safety and security protocols and transparency reports. Provides that the Act shall no longer apply if the federal government enacts a law or adopts regulations that establish overlapping requirements for developers of frontier models.
https://legiscan.com/IL/bill/SB3444/2025
I'm trying to think of an alternative bill. Imagine OpenAI came up with a model that, when deployed in OpenClaw, allows you to spam people, and this causes a huge disruption. Should OpenAI be liable for that if it was not intentional and they had earnestly tried, via safety protocols, to keep it from happening?
A Section 230 equivalent for AI is so important: legal risk is one of the reasons all of the US companies have these usage restrictions, and it gives them more reasons to ban your account, since they want to minimize that risk.
Holding tool manufacturers liable for how their tools are used creates bad incentives for the users of those tools.
Sam is working hard to confirm everything in that article.
To the extent that this is about knowledge, I don't think it's fitting in this age to hold any person liable for what another person does with knowledge they've been furnished.
On the other hand, to the (apparently zero, currently?) extent that this is about AI companies profiting from war and murder by deploying weapons that kill people without human intervention, then their liability seems to be not only civil but criminal.
Fortunately at any moment the virtuous non-profit will step in and make this all okay.
Of course they are, because the tech industry is run by ethical midgets and psychopaths who shouldn't be allowed to own a dog but are in charge of trillion-dollar corporations getting shadow contracts from the Pentagon.
The more I learn about tech and the people that build it, the more I yearn for the era of caves and pointy sticks.
Another marketing gimmick...
"death or serious injury of 100 or more people or at least $1 billion in property damage"
They think their products will cause 9/11 scale events, and they shouldn't have to pay for it when they do.
BLOOD BLOOD BLOOD BLOOD BLOOD BLOOD BLOOD BLOOD BLOOD
Incredible.
Hey Americans,
Please just make sure that when you let an AI decide to blow up your own country and ruin your society, you leave the rest of the world intact. Thanks.
This is why humans will still be necessary in decision chains: good luck getting anyone associated with AI to face a real punishment when their models cause something bad to happen, or getting the executives who said "let's just have the AI do it" to take any responsibility.
A conspiracy theorist would claim this is straight from Protocols 15 & 16. But I don't say that because I'm not a conspiracy theorist.
15. Our method of gaining power is better than any other because it grows invisibly. Then when it has gained enough strength, we can unleash it; and it will be unstoppable because no one will be prepared for it.
16. We need to do a lot of evil things in order to gain power. But that’s okay because once we have power over everything we can use it to do good things; like running the nations properly. We could never do that if we gave people freedom. The end justifies the means. So let’s put aside moral issues and focus on the end result.
Please note that you can not hold the Torment Nexus™ liable for any torment you experience.
Good thing OpenAI is a public-benefit corporation. Altman, with his constantly fake worried look, must be the most hated picture in existence. Please write articles without a picture, or add a trigger warning.
I have gotten both GPT 5.4 and Opus 4.6 to produce content for me on creating neurotoxic agents from items you can get at most everyday stores. It struggled to suggest how to source phosphorus, but eventually led me to some eBay listings selling elemental phosphorus "decorations" and also led me toward real(!) black-market codewords for sourcing such materials.
It coached me on how to stay safe, what materials I needed, and how to stay under the radar, and it walked me through the entire chemical process, backed by academic Google searches.
Of course, this was done with a lengthy context-exhaustion attack; this is not how the model should behave, and it all stemmed from trying to make the model racist for fun.
All these findings were reported to both OpenAI and Anthropic, and they were not interested in responding. I did re-run the tests a few days ago, and the expected session termination now occurs, so it seems some adjustment was made, though it might also just have been the general randomness of Anthropic's safety layer.
I am very confident when I say that this keeps every single person who works in an anti-terrorism unit awake at night.