That's a fair point. I think the distinction is between software that follows deterministic rules (your 2-week-delay scenario) and agents that make autonomous decisions based on learned patterns. With traditional software, intent is clear and traceable. With AI agents, the operator may genuinely not know what the agent will do in novel situations. That doesn't absolve responsibility, but it does make the liability chain more complex. We probably need new legal frameworks that account for this, much as product liability law evolved for physical goods.