What is not generally understood is that these models don’t predict egg prices or inflation in Italy.
They decompose a time series into trends, seasonality and residuals. That’s what they are actually modelling.
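The decomposition being described can be sketched in a few lines. This is a minimal additive version, purely illustrative: real tools (STL, statsmodels' seasonal_decompose) handle trend estimation and edge effects far more carefully.

```python
import numpy as np

def decompose(y, period):
    """Classical additive decomposition: y = trend + seasonal + residual.
    Minimal sketch only -- edge handling and even-period centering are
    simplified compared to real implementations."""
    y = np.asarray(y, dtype=float)
    # Trend: moving average over one full season (crude near the edges)
    trend = np.convolve(y, np.ones(period) / period, mode="same")
    # Seasonal: average the detrended values at each point in the cycle
    detrended = y - trend
    cycle = np.array([detrended[i::period].mean() for i in range(period)])
    seasonal = np.tile(cycle, len(y) // period + 1)[: len(y)]
    residual = y - trend - seasonal
    return trend, seasonal, residual
```

Anything a one-off shock contributes ends up in the residual unless it repeats on the period.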
They cannot predict wars in the Middle East influencing inflation unless there is a seasonal pattern to it.
That's what traditional time-series modelling does. This is a foundational model, which means it's just a neural network trained on lots of time series. (So maybe OP's question still stands? But it's the same question as "how can LLMs be good at so many different kinds of conversations?")
Wars in the Middle East seem to have increasingly regular patterns tied to stock market opening hours, unfortunately.
What makes these models different from models used for e.g. audio?
Or other low-dimensional time domain signals?
I am not familiar with time series models, but judging from your answer, it would be necessary to feed long time series into this model for it to detect trends. What is a token here? Can it, for lack of a better example, take in all intraday movements of a stock for a day, a week, a month, etc.?
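What a "token" is varies by model. One published approach (Chronos) rescales the series and quantizes each value into a fixed vocabulary of bins, so a day, week, or month of prices just becomes a longer token sequence, up to the model's context limit. A rough sketch of that idea; the bin count and clipping range here are illustrative, not any particular model's exact values:

```python
import numpy as np

def tokenize(series, n_bins=1024, clip=15.0):
    """Turn real values into integer 'tokens' via scaling + uniform binning.
    Parameter values are hypothetical; the general idea follows
    Chronos-style time-series tokenization."""
    x = np.asarray(series, dtype=float)
    scale = float(np.mean(np.abs(x))) or 1.0  # mean-absolute scaling
    scaled = np.clip(x / scale, -clip, clip)
    tokens = np.floor((scaled + clip) / (2 * clip) * (n_bins - 1)).astype(int)
    return tokens, scale

def detokenize(tokens, scale, n_bins=1024, clip=15.0):
    """Map bin ids back to approximate real values."""
    return (tokens / (n_bins - 1) * (2 * clip) - clip) * scale
```

The round trip is lossy by one bin width, which is the price of having a discrete vocabulary a transformer can predict over.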
Do these models predict on just a single time series then?
It is far more useful for predictions to look for correlations between time series. This is far more complex than looking for correlations in general, because most time series trend up or down and therefore correlate spuriously: any two trending series look related even when they are independent.
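The trending-series pitfall is easy to demonstrate: two unrelated series that both drift upward correlate strongly in levels, and the correlation largely vanishes once you difference them. The series names below are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(500, dtype=float)

# Two unrelated noisy series that both happen to trend upward
egg_prices = 0.01 * t + rng.normal(size=500)
subscriptions = 0.02 * t + rng.normal(size=500)

corr_levels = np.corrcoef(egg_prices, subscriptions)[0, 1]
corr_diffs = np.corrcoef(np.diff(egg_prices), np.diff(subscriptions))[0, 1]
print(corr_levels)  # high, driven purely by the shared trend
print(corr_diffs)   # near zero once the trend is differenced away
```

This is why serious pipelines difference or detrend before looking for cross-series relationships.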
ARIMA and ARMA models
ar(k) stuff, sure. that's old news. i would expect the newfangled stuff to be good at 0-shot learning of pre-event signatures spread across multiple series, at a minimum.
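For reference, the ar(k) baseline being waved off here fits in a few lines of ordinary least squares. A toy sketch only; statsmodels' ARIMA is the standard route in practice:

```python
import numpy as np

def fit_ar(y, k):
    """Fit AR(k) by ordinary least squares:
    y[t] ~ c + phi_1*y[t-1] + ... + phi_k*y[t-k]."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # One column per lag: column j holds y[t-(j+1)] for t = k..n-1
    lags = np.column_stack([y[k - j - 1 : n - j - 1] for j in range(k)])
    X = np.column_stack([np.ones(n - k), lags])
    coef, *_ = np.linalg.lstsq(X, y[k:], rcond=None)
    return coef  # [c, phi_1, ..., phi_k]
```

A foundation model's pitch is precisely that it is not limited to this per-series linear structure; whether it actually picks up cross-series pre-event signatures zero-shot is the open question.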
It is the Middle East. Wars are always in season. And supply exceeds demand.
The main issue is that people do use them to predict bitcoin prices intraday and that sort of thing.
> They cannot predict wars in the Middle East influencing inflation unless there is a seasonal pattern
well...