I can answer that one: none.
The only thing I can think of is massively increased context windows (around 4k tokens for GPT-3), but a million-token context with degraded performance when full isn't what I'd call resolved.