Why do you think you're right? (optional)
The buckets we’re given here are pretty narrow: Pennsylvania alone, for example, has 19 electoral votes, so each of the two middle buckets is less than two Pennsylvanias wide. And of course we should expect individual states’ results to be positively correlated with one another most of the time, which makes the distribution of electoral college votes wider than it would be if states voted independently.
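To make the correlation point concrete, here is a toy simulation (my own illustration, not part of any of the forecasts below): 50 fictitious equal-sized states, each a coin flip, once with fully independent results and once with a shared national swing. The correlated version spreads the electoral-vote total out far more.

```python
# Toy sketch: correlated state results widen the electoral-vote distribution.
# All numbers here are made up for illustration (50 equal states, 50/50 races).
import numpy as np

rng = np.random.default_rng(0)
n_states, n_sims = 50, 20_000
votes_per_state = 538 / n_states

# Independent states: every state flips its own fair coin.
indep_wins = rng.random((n_sims, n_states)) < 0.5

# Correlated states: a common national swing shifts every state's win
# probability in the same direction in each simulated election.
swing = rng.normal(0.0, 0.1, size=(n_sims, 1))
corr_wins = rng.random((n_sims, n_states)) < (0.5 + swing)

for name, wins in [("independent", indep_wins), ("correlated", corr_wins)]:
    ev_totals = wins.sum(axis=1) * votes_per_state
    print(f"{name}: std of electoral votes ≈ {ev_totals.std():.0f}")
```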
For a numerical estimate I looked at some other electoral college forecasts around the web, including FiveThirtyEight (still their forecast for Biden, which they haven’t updated yet), Polymarket and Metaculus. Here are their cumulative distribution functions on the same scale, compared to ours:
As you can see, the range of the buckets we’re given here is comparatively small. Maybe that anchors people? Anyway, the other forecasts have much heavier-tailed distributions. Here is the code for the graphic; anyone interested should also be able to run it right there to pull current data from the other three sites: https://colab.research.google.com/drive/1Sq473ytB8KYJtoXp9qjl0nb_zRkW45BU
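For anyone who doesn’t want to open the notebook, the plotting step looks roughly like this. It’s a minimal sketch rather than the actual Colab code, and it assumes each forecast has already been fetched and reduced to a discrete probability distribution over Trump’s electoral votes (the uniform distributions below are just placeholders for the real data):

```python
# Sketch of the CDF comparison; the `forecasts` dict is a placeholder for the
# distributions the real notebook pulls from FiveThirtyEight, Polymarket and Metaculus.
import numpy as np
import matplotlib.pyplot as plt

ev = np.arange(539)  # possible electoral-vote totals for Trump, 0..538

def plot_cdf(probs, label):
    """Plot the cumulative distribution of one forecast."""
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()          # guard against rounding drift
    plt.step(ev, np.cumsum(probs), where="post", label=label)

forecasts = {
    "FiveThirtyEight": np.full(539, 1 / 539),  # placeholder, not real data
    "Polymarket": np.full(539, 1 / 539),
    "Metaculus": np.full(539, 1 / 539),
}

for name, probs in forecasts.items():
    plot_cdf(probs, name)

plt.xlabel("Electoral votes for Trump")
plt.ylabel("Cumulative probability")
plt.legend()
plt.show()
```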
The script also converts the three other estimates into Good Judgment Open's scale. Here is what they would predict on this question:
538's (still-Biden) forecast in GJOpen's buckets:
-235: 34.9%
235-269: 14.4%
269-307: 17.5%
307-: 33.2%
Polymarket's forecast in GJOpen's buckets:
-235: 41.8%
235-269: 16.3%
269-307: 12.5%
307-: 29.4%
Metaculus' crowd forecast in GJOpen's buckets:
-235: 23.8%
235-269: 31.3%
269-307: 26.9%
307-: 17.9%
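The conversion itself is straightforward; a sketch of how a distribution over electoral votes can be collapsed into these four buckets is below. The exact placement of the boundary values (235, 269, 307) is my guess here, and it’s also where the off-by-one issue I mention in the next section comes from.

```python
# Sketch: sum the probability mass of a distribution over Trump's electoral
# votes into GJOpen's four buckets. Boundary handling (<=, <, >=, >) is an
# assumption; the real script may split the edge values slightly differently.
import numpy as np

def to_gjopen_buckets(probs):
    probs = np.asarray(probs, dtype=float)
    probs = probs / probs.sum()
    ev = np.arange(len(probs))
    return {
        "-235":    probs[ev <= 235].sum(),
        "235-269": probs[(ev > 235) & (ev < 269)].sum(),
        "269-307": probs[(ev >= 269) & (ev <= 307)].sum(),
        "307-":    probs[ev > 307].sum(),
    }

# Example with a uniform placeholder distribution over 0..538 votes:
print(to_gjopen_buckets(np.full(539, 1 / 539)))
```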
Why might you be wrong?
To make the numbers comparable across the different sites I’m assuming Trump has a 100% chance of remaining the candidate; relaxing that would shift maybe another percent or two into the lowest bucket. Also, in aligning the buckets across sources, I’ve made some off-by-one errors in the electoral-vote counts that I was too lazy to fix.
I think Polymarket probably overestimates unlikely outcomes due to liquidity concerns and the favourite-longshot bias. Metaculus has a tool for entering distributions, but by design it’s hard to make nice heavy-tailed distributions with it.
The issue with such sources is the participants’ political orientation.
The answer or forecast you get depends heavily on who takes part, or who feels inclined to answer. Both Polymarket’s and 538’s results look highly biased and/or outdated. In addition, the real-money markets have a meta-polling problem: from a large chunk of participants you can’t expect an answer that reflects the most likely outcome; instead the forecast reflects how wrong previous rounds of polling have been, and shows whatever result looks most desirable to some participants given the best (initial) odds.
@SE_Meyer Yeah, I definitely agree that both polls and prediction markets have their flaws, and the different sites of course contradict each other to some extent. But I do think they each provide some evidence, especially when they broadly agree. And really, being sceptical about polling makes me want to widen my forecast even more: after all, there could be some giant red or blue wave brewing and maybe we just wouldn’t know.
For what it's worth, PredictIt also has a market on the same thing as Polymarket (https://www.predictit.org/markets/detail/8077/What-will-be-the-Electoral-College-margin-in-the-2024-presidential-election). I’m not on either, but I hear people saying PredictIt tends to be more Democrat-leaning, while Polymarket leans Republican with its crypto association. PredictIt’s numbers are pretty similar to Polymarket’s, but then the two could just be arbitraged against each other.

Poly might be somewhat more right-leaning, but I think the bigger factor is the $850 limit at PI, which forces players who are maxed (or nearly maxed) to "wash" their shares, putting them up for sale (sometimes even at a loss) so they can reload later with cheaper shares. These sales stack up on the demand side, slowly pushing the price higher. While this might happen at Poly as well (as players hit their personal limits), the big-dollar players there can plow right through it all, something that can't happen at PI.
@LokiOdinevich My GJOpen career has peaked
And everyone else reading this should of course subscribe to his newsletter for all your forecasting news (such as the image above); it's great: https://forecasting.substack.com/