Scenario 2 assumes that no technological development can happen without AI, which seems like a stretch to me. Honestly, the worst scenario I can think of is 40ish years of AI-assisted development followed by a technological crash because there are no competent engineers left to fix the slop.
I didn't say all technological development would be halted, just that tech "in many fields" would have to be stalled for safety (AI development, algorithm development that would reduce the cost of training models, etc.). Naturally, if AI is considered an existential threat, there would be a huge safety radius around anything that would allow bad actors to train AI models.