Hacker News

Strilanc · today at 8:18 AM · 1 reply

For each chick they ran 24 trials divided into 4 blocks, with retraining on the ambiguous shape (with actual rewards) after each block. During the test trials themselves they didn't give rewards. In figure 1 they show the data bucketed by trial index. It's a bit surprising there's no apparent effect of trial number, e.g. the first trial after retraining being slightly different.
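For what it's worth, a trial-order check like that is cheap to sketch. Everything below is made up for illustration (chick counts, response rates, the flat-rate null) — it's not the paper's data, just the shape of the test:

```python
import math
import random

random.seed(0)

# Synthetic stand-in for the design: 20 chicks x 24 trials,
# 4 blocks of 6 trials each, response coded 1/0 per trial.
# All numbers here are assumptions, not the study's data.
n_chicks, n_blocks, trials_per_block = 20, 4, 6
p_choice = 0.65  # assumed flat response rate, i.e. no trial-order effect

data = [
    (chick, block, pos, 1 if random.random() < p_choice else 0)
    for chick in range(n_chicks)
    for block in range(n_blocks)
    for pos in range(trials_per_block)
]

# Compare the first trial after retraining (pos == 0) against later trials
# with a two-proportion z-test.
first = [c for (_, _, pos, c) in data if pos == 0]
rest = [c for (_, _, pos, c) in data if pos > 0]

p1, p2 = sum(first) / len(first), sum(rest) / len(rest)
p_pool = (sum(first) + sum(rest)) / (len(first) + len(rest))
se = math.sqrt(p_pool * (1 - p_pool) * (1 / len(first) + 1 / len(rest)))
z = (p1 - p2) / se
print(f"first-trial rate={p1:.3f}, later-trial rate={p2:.3f}, z={z:.2f}")
```

A flat figure 1 would correspond to z staying near zero here; a retraining carry-over effect would show up as the pos == 0 proportion drifting away from the rest.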

I have to admit I'm super skeptical that there isn't some stupid mistake here. Definitely thought provoking. But I wish they'd kept iteratively removing elements until the correlation stopped happening, so they could nail down the causation more precisely.


Replies

rubidium · today at 7:47 PM

I do agree — my skepticism level rises extremely high for any experimental psychology result. There are just so many ways to bias results, in addition to the "do enough experiments and one of them will get a statistically unlikely result" problem.
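That "enough experiments" problem is easy to demonstrate with a quick simulation: run a pile of experiments where there is genuinely no effect, test each at alpha = 0.05, and watch some of them come out "significant" anyway. The setup below (coin flips, a normal-approximation z-test, 200 experiments) is just an illustrative sketch, not anyone's actual protocol:

```python
import math
import random

random.seed(1)

def null_experiment(n=50):
    """One null experiment: n fair coin flips, two-sided z-test against p=0.5.
    Returns the p-value from the normal approximation."""
    heads = sum(random.random() < 0.5 for _ in range(n))
    z = (heads - n * 0.5) / math.sqrt(n * 0.25)
    # two-sided p-value: erfc(|z|/sqrt(2)) = 2 * (1 - Phi(|z|))
    return math.erfc(abs(z) / math.sqrt(2))

n_experiments = 200
false_positives = sum(null_experiment() < 0.05 for _ in range(n_experiments))
print(f"{false_positives}/{n_experiments} null experiments reach p<0.05")
```

Roughly 5% of the null experiments clear the threshold by construction, so with 200 of them you essentially always get several "discoveries" from pure noise — which is why repeated trying without correction is worrying.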

This group does a lot of work like this https://www.dpg.unipd.it/en/compcog/publications … so it's tempting to think they keep trying things until something odd happens (kind of like physicists who look for fifth forces — eventually they find something odd, but often it's just an experimental issue they need to understand further).