Is there something equivalent in other industries that we can compare to?
This is the summary:
>Creates the Artificial Intelligence Safety Act. Provides that a developer of a frontier artificial intelligence model shall not be held liable for critical harms caused by the frontier model if the developer did not intentionally or recklessly cause the critical harms and the developer publishes a safety and security protocol and transparency report on its website. Provides that a developer shall be deemed to have complied with these requirements if the developer: (1) agrees to be bound by safety and security requirements adopted by the European Union; or (2) enters into an agreement with an agency of the federal government that satisfies specified requirements. Sets forth requirements for safety and security protocols and transparency reports. Provides that the Act shall no longer apply if the federal government enacts a law or adopts regulations that establish overlapping requirements for developers of frontier models.
https://legiscan.com/IL/bill/SB3444/2025
I'm trying to think of an alternative bill. Imagine OpenAI came up with a model that, when deployed in OpenClaw, lets you spam people, and this causes a huge disruption. Should OpenAI be liable for it, if the harm was not intentional and they had earnestly tried to prevent it through their safety protocols?
Yeah, I feel that in most scenarios the liability should lie with the user of the LLM. If the developers of the LLMs become liable, we can expect much more overactive refusal systems, very likely robust chat surveillance that looks for patterns in user requests, and likely more gatekeeping of the premier models.