Back in July, I wrote about a study that looked at ‘expert’ predictions and found them wanting. The authors asked both social scientists and laymen to predict the size and direction of social change in the U.S. over a 6-month period. Overall, the former group did no better than the latter – they were slightly more accurate in some domains, and slightly less accurate in others.
A new study (which hasn’t yet been peer-reviewed) carried out a similar exercise, and reached roughly the same conclusion.
Igor Grossman and colleagues invited social scientists to participate in two forecasting tournaments covering the period from May 2020 to April 2021, with the second tournament beginning six months after the first. Participants entered in teams, and were asked to forecast social change in 12 different domains.
All teams were given several years' worth of historical data for each domain, which they could use to hone their forecasts. They were also given feedback at the six-month mark (i.e., just prior to the second tournament).
The researchers judged teams’ predictions against two alternative benchmarks: the average forecasts from a sample of laymen; and the best-performing of three simple models (a historical average, a linear trend, and a random walk). Recall that another recent study found social scientists can’t predict better than simple models.
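To make those benchmarks concrete, here is a minimal sketch in Python of how such naive models generate forecasts from past data. The function name, example series and implementation details are my own illustration, not the study’s actual code.

```python
import numpy as np

def benchmark_forecasts(history, horizon):
    """Illustrative naive benchmarks: a historical average, a linear
    trend extrapolation, and a random walk (last value carried forward)."""
    t = np.arange(len(history))
    slope, intercept = np.polyfit(t, history, 1)  # fit a straight line to the past
    future_t = np.arange(len(history), len(history) + horizon)
    return {
        "historical_average": np.full(horizon, np.mean(history)),
        "linear_trend": intercept + slope * future_t,
        "random_walk": np.full(horizon, history[-1]),
    }

# Hypothetical monthly values for one domain, forecast 12 months ahead
series = np.array([48.2, 47.9, 49.1, 50.3, 50.0, 51.2, 52.4, 51.8, 52.9, 53.5])
print(benchmark_forecasts(series, horizon=12))
```

Each benchmark needs nothing beyond the historical series itself, which is what makes them such an unflattering yardstick when experts fail to beat them.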
Grossman and colleagues’ main result is shown in the chart below. Each coloured symbol shows the average forecasting error for ‘experts’, laymen and simple models, respectively (the further to the right, the greater the error and the less accurate the forecast).
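For readers unfamiliar with how such an error score is computed, the sketch below uses plain mean absolute error: the average gap between forecast and realized values. The paper’s exact scoring rule may differ; the numbers here are invented purely for illustration.

```python
import numpy as np

def mean_absolute_error(forecast, observed):
    """Average absolute gap between forecast and realized values:
    a larger score means a less accurate forecast (further right on the chart)."""
    return np.mean(np.abs(np.asarray(forecast) - np.asarray(observed)))

# Hypothetical example: six months of realized values plus two sets of forecasts
observed    = [50.1, 50.8, 51.5, 52.0, 52.6, 53.1]
expert_team = [49.0, 50.0, 50.5, 51.0, 51.5, 52.0]
naive_model = [50.3, 50.3, 50.3, 50.3, 50.3, 50.3]

print(mean_absolute_error(expert_team, observed))  # ≈ 1.02
print(mean_absolute_error(naive_model, observed))  # ≈ 1.45
```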