>The rest of us needn't give a shit if we can hold you accountable for your software's consequences, AI or no.
See, this is the fun thing about liability: we tend to try to limit scenarios where people can cause near-unlimited damage while having very limited assets in the first place. Hence why things like asymmetric warfare are so expensive to try to prevent.
But hey, have fun going after some teenager with 3 dollars to their name after they cause a billion dollars in damages.
Well, that unlimited-damage scenario is one I'd need to see successfully demonstrated before I worry about it. Like, sure, if we end up building some computer program that lets a bored kid do real damage, then I'll eat my words, but we're nowhere near there today, and for all anyone actually knows we may never get there outside of fiction.
Not unlike nuclear weapons, this space is fairly self-regulating in that there's a very, very high financial bar to clear. To train an AI model you need many datacenters full of billions of dollars of equipment, thousands of people to operate them, and a crack team of the world's leading experts running the show. Not quite the scale of the Manhattan Project, but definitely not something I'll worry about individuals doing anytime soon. And even then there's no hint of a successful test, even from all these large, staffed, well-funded research efforts. So before I worry about "damages" of any magnitude, let alone billions of dollars' worth, I'll need to see these large research labs produce something that can actually do some damage.
If we get to the point where there's some tangible, non-fictional threat to worry about, then it's probably time to worry about "safety". Until then, it's a pretend problem that serves only to make AI seem more capable than it actually is.