One thing that really jumps out at me is the lack of a performance gap between the 90-day and 30-day resolution times. If two months of new information doesn't lead to a materially improved forecast, then to me this strongly reinforces the takeaway that these markets aren't really forecasting so much as "the oracle is largely saying what other oracles already say, just updated faster." Am I misunderstanding the data here?
edit: I'm also going back to my Bayesian theory days and would be super interested to see a deep dive into whether these markets are rationally updating their beliefs over time. My recollection is super vague here, but I recall something like non-transitive belief loops can lead to Dutch books (so, say, Johnny Punter thinks that Trump would win an election against Biden, Biden would win against Ross Perot, and Ross Perot would win against Trump). I'd like to know whether these kinds of issues are showing up in these markets.
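A minimal sketch of the Dutch book idea: if someone's quoted probabilities for mutually exclusive, exhaustive outcomes sum above 1, a counterparty can lock in a risk-free profit no matter what happens. The quotes below are made up for illustration:

```python
def dutch_book_profit(prices):
    """Check YES prices for mutually exclusive, exhaustive outcomes.
    If they sum above 1, selling one YES share of each outcome locks
    in a risk-free profit, whatever actually happens."""
    total = sum(prices.values())
    if total <= 1.0:
        return 0.0  # prices are coherent; no arbitrage
    # Sell one share of each outcome, collecting `total` in premiums.
    # Exactly one outcome resolves YES, costing a $1 payout.
    return total - 1.0

# Hypothetical incoherent quotes (implied probabilities sum to 1.10):
quotes = {"Trump": 0.45, "Biden": 0.40, "Perot": 0.25}
print(round(dutch_book_profit(quotes), 2))  # prints 0.1
```

Real order books on Polymarket or Kalshi rarely leave gaps this large for long, which is itself a weak coherence test of the commenter's worry.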
Interesting read. Regarding the relationship between volume and accuracy, there need not be one in limit-order-book markets like Kalshi and Polymarket. In theory, as long as quotes are accurate and adjust quickly to new information, there is no need (and no incentive) to trade, since prices are already efficient. This is the case in US equity markets: most price discovery occurs through quote updates, not through trades.
Studying prediction markets is one of my current research areas. In my latest paper (preprint at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6443103), we find that markets on Polymarket are, on average, quite accurate and unbiased. We saw a similar non-relationship between trade volume and accuracy past a certain threshold.
I think a fundamental problem is that the customer of a prediction market is the trader (gambler?), not the public. If you want accurate forecasts, you need sharp traders. If you want sharp traders, you need to pay them a lot. As a platform, the straightforward way to do that is to attract a large number of uninformed gamblers. Ultimately, accuracy is determined not by volume but by the ratio of informed capital to uninformed capital trading for idiosyncratic reasons uncorrelated with the "true" probability. Someone has to put in the effort to make the markets accurate, that someone has to be paid, and that money has to come from somewhere.
It sounds like they should be called "indicator markets" rather than "prediction markets", as the data shows they largely just summarize the current knowledge, with little predictive ability.
Given the request about engaging with this specific article:
>I've thought hard about how to sell prediction markets to consumers. In 2020, I created Google’s current internal prediction market. Since then, I’ve served as the CTO of Metaculus, a non-market-based crowd-forecasting website, and now run FutureSearch, a startup that provides AI forecasters and researchers.
I feel like openly saying you professionally try to make people believe in markets reduces the impact of any further claims.
>Still, there is a benefit to speed. On March 11, 2026, the Financial Times reported that, upon news of Iran War escalation, the Polymarket odds of inflation at or above 2.8% rose to above 90%. This illustrated an immediate domestic impact to US foreign policy, which could influence the public in a way that updates months later from professional economists might not.
I don't understand how this or similar predictions are of any value. "People strongly believe a war will worsen inflation" is information you could get anywhere, and it isn't necessarily based on any high-quality decision-making.
I recently tried to launch a site for friends and family that allowed people to make confidence predictions on various outcomes so they could track their calibration over time. It was like "I'm 84% certain Kansas City will beat Buffalo." I had a lot of fun with it since I'm a nerd about this stuff, and I actually demonstrably improved my calibration. But the only sources I could find for rapid repeatable bets were sports predictions. And I definitely did not want to include money or betting for all the annoying legal reasons. People had fun using it once for March Madness 2025 but traffic really dwindled after that. My conclusion was that the overall subject just wasn't inherently fun enough to do it without money involved, so I made the site dormant.
Getting better calibrated really is worthwhile; I just wish there were more of an appetite to do it without involving money.
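For anyone who wants to try this kind of calibration tracking themselves, the scoring is simple to implement. A sketch of the two standard pieces, a Brier score and a calibration table; the pick data is invented:

```python
from collections import defaultdict

def brier_score(forecasts):
    """Mean squared error between stated confidence and outcome.
    `forecasts` is a list of (probability, outcome) pairs, where
    outcome is 1 if the predicted event happened, else 0.
    0.0 is perfect; always saying 0.5 scores 0.25."""
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

def calibration_table(forecasts):
    """Group forecasts into 0.1-wide confidence buckets and report how
    often each bucket's events actually happened. A well-calibrated
    forecaster's 0.8 bucket should resolve YES about 80% of the time."""
    buckets = defaultdict(list)
    for p, o in forecasts:
        bucket = int(round(p * 100)) // 10 / 10  # floor to 0.1 grid
        buckets[bucket].append(o)
    return {b: sum(os) / len(os) for b, os in sorted(buckets.items())}

# Hypothetical picks like "84% certain Kansas City beats Buffalo":
picks = [(0.84, 1), (0.84, 1), (0.84, 0), (0.6, 1), (0.6, 0), (0.3, 0)]
print(brier_score(picks))
print(calibration_table(picks))
```

The bucketing goes through integers to avoid floating-point surprises like `0.3 * 10` landing just below 3.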
Nice article. One small comment: it's very hard to conclude anything about accuracy over time, because the comparisons may not be apples to apples. For example, if there used to be lots of questions about whether it will rain in Boston and now there are lots of questions about whether it will rain in Phoenix, it will look like predictions are getting more accurate, but the questions are just getting easier.
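This confound is easy to demonstrate with a toy simulation: hold forecaster skill fixed, change only the question mix, and the aggregate score still "improves". All numbers here are invented:

```python
def mean_brier(forecasts):
    # forecasts: list of (stated probability, outcome) pairs
    return sum((p - o) ** 2 for p, o in forecasts) / len(forecasts)

# Same forecaster "skill" on both question types: they state the true
# base rate, so their expected score per question is p * (1 - p).
hard = [(0.5, 1), (0.5, 0)]             # coin-flip questions
easy = [(0.95, 1)] * 19 + [(0.95, 0)]   # near-certain questions

era_1 = hard * 10 + easy * 1   # mostly hard questions
era_2 = hard * 1 + easy * 10   # mostly easy questions

# era_2 looks "more accurate" despite identical forecasting skill
print(mean_brier(era_1), mean_brier(era_2))
```

Any serious accuracy-over-time claim would need to control for question difficulty, e.g. by comparing within matched question categories.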
> Try it yourself. Pick a topic that is important to you. Try searching Polymarket for probabilities, versus asking Claude about it. I wager you’ll prefer Claude’s take, even if it is less accurate. For one thing, Claude can speak to issues that are not properly resolvable forecasting questions.
I thought this was the very thing we wanted to avoid by creating reputation or money based prediction platforms rewarding statistical accuracy. We already have plenty of pundits speculating inaccurately about vague things they don't know much about.
We don't need AI to get more of that!
Random aside: I distinctly remember getting on a phone call with people from the SEC (US Gov't) with the goal of understanding if I could legally start a prediction market. This was during 2020 or 2021. I recall them saying basically "no way" and that it wouldn't be legal, and would be rife with abuse.
Fun times.
I dove into the prediction markets rabbit hole a number of years back, and I've personally witnessed scenarios where the wisdom of crowds really seems to work. What I have not read, including in this piece, is a rigorous theory of what makes it effective or not. There are hints here and in the Wisdom of Crowds book, but I've never read a really comprehensive theory.
Most people don't know that "prediction markets" are actually based on an idea from DARPA in 2002, after 9/11.
Good source.
The only complaint I have (not really directed at the article, but..) is putting all these theories and somewhat private experiments into the same room as pure gambling schemes turbocharged by "the algorithm" and political corruption.
While far from Heaven's gates, some guy trying to predict the price of corn next year is not on the same plane as those who had the "very original" idea every guy in his early 20s has at some point but never pursues because he read some articles about "the law". Like it or not, the laws, or the remnants of them, were put in place because of the obvious degenerate attitudes, and their consequences, that gambling was always known for.
And no, it's not a "market". Even Uber appears to have some usefulness to offset all the lying, corruption, and criminality they engaged in to become what they are. These ones don't take you anywhere other than gambling addiction.
End of rant, sorry.
Did I just read a claude ad?
All: HN has had many threads with generic arguments about how prediction markets are/aren't useless, casinos, social ills, and so on. It would be good to avoid that in this case, because OP is full of specific information and arguments. It deserves a less generic discussion.
It's fine, of course, to be for/against/etc. and have whatever view you have. Just please engage with the specific article. It will make for a less repetitive and (therefore) more interesting thread.