It's "surprising" because there's supposed to be this thing called "alignment" which in general is supposed to make AIs not do such things.
If the headline were the less interesting "AIs never recommend nuclear strikes in war games", people on HN would probably ask, "How is that surprising? Isn't that what alignment is supposed to do?"
In any case, we're extremely lucky that there's about 0.001% probability of LLMs being a path to AGI.
It's pretty safe to say that AGI requires a lot more than probabilistically picking plausible next words.
The danger is the number of people in positions of leadership who don't get this, and who are easily seduced by the "fake intelligence" of LLMs.