Frequently Asked Questions (FAQ)
Last updated: 3 June 2024
Please read our FAQ below. If you still have questions about Good Judgment Open (GJ Open/GJO), email us at [email protected].
1. What is Good Judgment Open?
GJ Open is a crowd-forecasting site where you can hone your forecasting skills, learn about the world, and engage with other forecasters. On GJ Open, you can make probabilistic forecasts about the likelihood of future events, learn how accurate you were, and see how your accuracy compares with the crowd's. Unlike prediction markets and other forecasting sites, GJ Open lets you share your reasoning with other forecasters and invite them to challenge your assumptions.
GJ Open taps into the wisdom of the crowd. We believe that collective judgment can help us better understand, and predict, the complex and ever-evolving world we live in.
GJ Open was born out of the Good Judgment Project, a multi-year research project which showed that the wisdom of the crowd could be applied to forecasting. Good Judgment Inc was founded to bring the science of forecasting to the public. GJ Open is designed for anyone and everyone to improve their forecasting skills and is not itself a scientific research project.
2. How do I get started?
If you're new to forecasting, we encourage you to watch a short video about probability forecasting.
When you're ready to begin, look at our active questions and start forecasting!
3. How do I compete against other forecasters?
Competitions on GJ Open are called challenges. Challenges are collections of questions organized by a theme or topic. Each challenge has its own leaderboard, which ranks forecasters by comparative accuracy against the forecasters participating in a specific challenge.
4. How are my forecasts scored for accuracy?
We encourage all forecasters to watch our short video about scoring at http://goodjudgment.io/Training/KeepingScore/index.html.
We report three numbers to quantify your forecasting accuracy and compare it with that of other users on the site: Brier Score, Median Score, and Relative Brier Score. Lower scores always indicate better accuracy, as in golf. Our primary measure of accuracy is the Relative Brier Score (formerly known as Accuracy Score), which compares your score to the crowd's. A question is not scored until its outcome is known and the question has resolved.
On your profile page, next to each question you’ll see several columns. Here are more detailed explanations of each:
Brier Score: The Brier score was originally proposed to quantify the accuracy of weather forecasts, but can be used to describe the accuracy of any probabilistic forecast. Roughly, the Brier score indicates how far away from the truth your forecast was.
The Brier score is the squared error of a probabilistic forecast. To calculate it, we divide your forecast by 100 so that your probabilities range between 0 (0%) and 1 (100%). Then, we code reality as either 0 (if the event did not happen) or 1 (if the event did happen). For each answer option, we take the difference between your forecast and the correct answer, square the differences, and add them all together. For a yes/no question where you forecasted 70% and the event happened, your score would be (1 – 0.7)² + (0 – 0.3)² = 0.18. For a question with three possible outcomes (A, B, C) where you forecasted A = 60%, B = 10%, C = 30% and A occurred, your score would be (1 – 0.6)² + (0 – 0.1)² + (0 – 0.3)² = 0.26. The best (lowest) possible Brier score is 0, and the worst (highest) possible Brier score is 2.
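For those who like to see the arithmetic spelled out, here is a minimal sketch of the calculation in Python (the function name and structure are ours for illustration, not part of the GJ Open site):

```python
def brier_score(forecast_probs, outcome_index):
    """Brier (squared-error) score for a single multi-option forecast.

    forecast_probs: probability assigned to each answer option (0 to 1, summing to 1)
    outcome_index: index of the option that actually occurred
    """
    reality = [1.0 if i == outcome_index else 0.0 for i in range(len(forecast_probs))]
    return sum((p - r) ** 2 for p, r in zip(forecast_probs, reality))

# The two worked examples from the text:
print(brier_score([0.7, 0.3], 0))       # ~0.18
print(brier_score([0.6, 0.1, 0.3], 0))  # ~0.26
```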
To determine your accuracy over the lifetime of a question, we calculate a Brier score for every day on which you had an active forecast, then take the average of those daily Brier scores and report it on your profile page. On days before you make your first forecast on a question, you do not receive a Brier score. Once you make a forecast on a question, we carry that forecast forward each day until you update it by submitting a new forecast.
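Continuing the sketch, the carry-forward rule described above might look like this (a simplified illustration reusing the hypothetical brier_score function; the day numbering and data structure are our own):

```python
def question_brier(forecasts_by_day, days_open, outcome_index):
    """Average daily Brier score over a question's lifetime.

    forecasts_by_day: {day: list of probabilities} for each day you forecasted
    days_open: total days the question was open (numbered 0 .. days_open - 1)
    """
    daily_scores = []
    current = None
    for day in range(days_open):
        if day in forecasts_by_day:
            current = forecasts_by_day[day]  # a new forecast replaces the old one
        if current is not None:              # carried forward until updated;
            daily_scores.append(brier_score(current, outcome_index))
        # days before your first forecast receive no score
    return sum(daily_scores) / len(daily_scores)
```

For example, on a question open for six days (days 0 through 5) where you forecast 70% "yes" on day 2, update to 90% on day 4, and the event happens, your daily Brier scores are 0.18, 0.18, 0.02, and 0.02, for an average of 0.10.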
The Brier Score listed in large font near the top of your profile page is the average of all of your questions’ Brier scores.
Median Score: The Median Score is simply the median of the Brier scores of all users with an active forecast on a question on a given day (in other words, all forecasts made on or before that day). As with your Brier Score, we calculate a Median Score for each day that a question is open, and the Median Score reported on the profile page is the average of the daily median scores for those days when you had an active forecast. We also report the average across all questions on which you made forecasts (in parentheses under your overall Brier Score).
Relative Brier Score: The Relative Brier Score (formerly known as Accuracy Score) is how we quantify your accuracy as compared to the crowd. It’s what we use to determine your position on leaderboards for Challenges and individual questions.
To calculate your Relative Brier Score for a single question, we take your average daily Brier score and subtract the average Median daily Brier score of the crowd. Then, we multiply the difference by your Participation Rate, which is the percentage of possible days on which you had an active forecast. This means negative scores indicate you were more accurate than the crowd, and positive scores indicate you were less accurate than the crowd (on average).
For Challenges, we calculate your Relative Brier Score for each question and add them together to calculate your cumulative Relative Brier Score. On questions where you don’t make a forecast, your Relative Brier Score is 0, so you aren’t penalized for skipping a question.
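Under the same assumptions as the sketches above, the per-question arithmetic might be expressed like this (an illustration, not the site's actual code):

```python
def relative_brier(user_daily, crowd_median_daily, days_open):
    """Relative Brier Score for a single question.

    user_daily: your daily Brier scores, one per day with an active forecast
    crowd_median_daily: the crowd's median daily Brier scores on those same days
    days_open: total number of days the question was open
    Negative results mean you were more accurate than the crowd.
    """
    avg_user = sum(user_daily) / len(user_daily)
    avg_median = sum(crowd_median_daily) / len(crowd_median_daily)
    participation_rate = len(user_daily) / days_open
    return (avg_user - avg_median) * participation_rate
```

For instance, if your average daily Brier score is 0.10, the crowd's average median is 0.20, and you had an active forecast on 25 of a question's 50 days, your Relative Brier Score is (0.10 − 0.20) × 0.5 = −0.05.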
5. What is the “Recent Consensus, Probability Over Time” graph?
The purpose of the “Recent Consensus, Probability Over Time” graph is not to compare your forecast against the ultimate Median Score, but rather to provide an estimate of the general consensus of forecasters on the question over time. The graph displays the mean of the most recent 40% of the current forecasts from each active forecaster. In other words, it reflects the consensus of the most recent 40% of forecasters, so there is an inherent lag. We found in our experience with the Good Judgment Project that 40% provides a good mix of recent activity and historical perspective for our types of questions. (Please note that prior to 19 August 2022, the graph displayed the median, not the mean, of the most recent 40% of current forecasts. All questions resolved prior to 19 August 2022 continue to display the median of the most recent 40% of forecasts.)
This means that the trend may not change very much when one or even many users make forecasts that differ from the trend. We do this deliberately in order to display a consensus forecast that is not overly influenced by outlier forecasts but still reflects the most recent wisdom of the crowd.
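As a rough sketch, the displayed consensus could be computed like this (our own illustration; the site's exact ordering, rounding, and cutoff rules may differ):

```python
import math

def recent_consensus(current_forecasts):
    """Mean of the most recent 40% of current forecasts.

    current_forecasts: list of (timestamp, probability) pairs, one per active
    forecaster, each holding that forecaster's latest forecast. Whether the
    40% cutoff rounds up or down is our assumption.
    """
    ordered = sorted(current_forecasts, key=lambda tf: tf[0])  # oldest first
    k = max(1, math.ceil(0.4 * len(ordered)))                  # most recent 40%
    recent = [prob for _, prob in ordered[-k:]]
    return sum(recent) / len(recent)
```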
Your Relative Brier Score, by contrast, is based on the median (not the mean) Brier score of all current forecasts on each day, no matter when those forecasts were made. This means that even if you beat the consensus displayed for a given day, you might not beat the median forecast on that date. The purpose of the graph is not to anchor your forecast against the “number to beat,” but to provide an informative estimate of the general consensus.
6. What is the "ordered categorical scoring rule"?
Some forecasting questions require the assignment of probabilities across answer options that are arranged in a specific order. The most common examples in our forecasting tournament are questions that ask the likelihood that an event will occur during one of three or more date ranges or the likelihood that the value of a quantitative variable (such as the number of refugees or the price of a barrel of oil) will fall within one of three or more quantity ranges.
Our usual Brier scoring rule does not consider the order of the answer options and therefore gives no credit for “near misses”: it treats a forecaster whose prediction is “wrong” only as a matter of rounding error as no more accurate than a forecaster whose prediction is off by an order of magnitude.
To address this issue, we have adopted a special “ordered categorical scoring rule” for questions with multiple answer options that are arranged in a sequential order. For more information on how scores are calculated, read this PDF document.
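The linked PDF is the authoritative definition. One common formulation of ordered categorical scoring, sketched here on the assumption that GJ Open's rule works similarly, averages binary Brier scores over every cut point between adjacent ordered options:

```python
def ordered_brier(forecast_probs, outcome_index):
    """Average of binary Brier scores across all order-respecting splits.

    forecast_probs: probabilities for the ordered answer options
    outcome_index: index of the option that actually occurred
    """
    k = len(forecast_probs)
    cut_scores = []
    for cut in range(1, k):                # split options into [0, cut) vs [cut, k)
        p_low = sum(forecast_probs[:cut])  # forecast mass below the cut
        r_low = 1.0 if outcome_index < cut else 0.0
        cut_scores.append((p_low - r_low) ** 2 + ((1 - p_low) - (1 - r_low)) ** 2)
    return sum(cut_scores) / len(cut_scores)
```

Under this formulation, putting 100% on the option adjacent to the correct one on a three-option question scores 1.0, while putting 100% on the far option scores 2.0, so near misses are rewarded.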
7. What are Open Questions?
Open Questions allow you to suggest new questions, share your opinion on a particular topic, or discuss something without making a forecast. Open Questions are never scored for accuracy.
8. Can I withdraw from a question?
There is no way to withdraw from a question or delete a forecast. We do this in order to avoid situations where a forecaster withdraws or deletes their forecast when it becomes clear that they will receive a bad score. Unfollowing a question on which you’ve made a forecast does not affect your score; it affects only your notifications and where you can find the question on the site. You are strongly encouraged to update your forecasts frequently, no matter how wrong you think your previous forecasts might be.
9. Can I suggest a question for the site?
Certainly! To suggest a question, you can post your suggestion on the webpage itself.
10. How do I become a better forecaster?
We suggest reviewing the material here, including both free and paid resources, and reading Superforecasting: The Art and Science of Prediction by Philip E. Tetlock and Dan Gardner. Most importantly, PRACTICE on questions here on GJ Open!
11. How do I become a Superforecaster?
In short: elite accuracy, along with quality comments and collegiality. You can read more here.
12. How do I delete my account?
We hope that you enjoy submitting forecasts, engaging with like-minded curious forecasters, and learning more about current events and interesting topics. That said, we understand that you might want to take a break from forecasting. GJ Open is set up to give you as much time as you need: Simply log in when you want to resume your forecasting career. If you still wish to delete your account, you can go to the top right icon and select “EDIT PROFILE” in the dropdown menu. Select “Data Management” on the left and the new page will display a “Delete Account” option.
Etiquette Policy
Last updated: 3 June 2024
1. Do you have a moderation or etiquette policy?
“Be kind, respect others, and stay on topic.” Most forecasters participate in this site to test and improve their forecasting ability, and our goal is to provide a fun, respectful, and constructive place for everyone to do so. We don’t delete comments, usernames, or taglines that follow these guidelines and do not violate our Terms of Service.
Things that are not tolerated include: 1) personal attacks against other users or the site administrators, 2) spamming, recruiting, or soliciting other users, and 3) “crusading” or repeatedly steering discussion away from forecasting, particularly for the purposes of (1) and (2).
By accessing or using GJO you have agreed not to post or otherwise make available any content that Good Judgment may deem to be harmful, threatening, unlawful, defamatory, infringing, abusive, inflammatory, harassing, vulgar, obscene, fraudulent, invasive of privacy or publicity rights, hateful, or racially, ethnically, or otherwise objectionable. You have also agreed not to post or otherwise make available any unsolicited or unauthorized advertising, solicitations, promotional materials, "junk mail," "spam," "chain letters," "pyramid schemes," or any other form of solicitation. For more detail on user conduct, please see our Terms of Service.
If you believe another forecaster is violating this policy, we encourage you to flag their comment by clicking the green “Flag” text below their forecast.
When a forecaster violates this policy, we will take the following actions:
- First infraction: Warning and reminder of the policy, unless an account is used for spamming on the site, in which case we will delete the spam comments and the account used to post them.
- Repeated infractions will be evaluated on a case-by-case basis and may include account suspension and a permanent ban from the site.
- Particularly serious violations of our Terms of Service, including specific threats against individuals such as other forecasters, site administrators, or public figures, will be evaluated on a case-by-case basis and may lead to account suspension and/or a permanent ban from the site without previous warning.
You can read more about our policies on user conduct in our Terms of Service.
You can flag any comments against our Terms of Service for attention by our system administrators. However, please don’t flag comments just because you disagree with them. If you find that you're unable to persuade another forecaster, let your forecast speak for itself. Eventually, scores will show who was right and who was wrong.
Question Clarifications & Resolutions
Last updated: 3 June 2024
This section describes our policies related to forecasting questions. If you're confused about a specific forecasting question, please read our FAQ below. If you still have questions, you can email us at [email protected].
1. What are the evidentiary standards for question resolution?
Unless otherwise specified, the outcome of a forecasting question will be determined using credible, open-source evidence and media reporting (e.g., Reuters, BBC, AP) available as of the conclusion of the question’s open period. Some questions will specify that resolution will be determined by specific sources (e.g., UNHCR data, Moody's). However, if there is substantial reason to believe there is an error in data as reported by a source, we reserve the right to conduct a review for plain error (i.e., an error in the data that is clear and obvious and that would materially impact the outcome of a question). For example, a Food and Agriculture Organization of the UN (FAO) publication once reported data for Pakistan that overstated the pertinent figures by a factor of one thousand. In that instance, we contacted FAO for clarification, after which it corrected its data.
Many important geopolitical events are difficult to anticipate because different scenarios and situations can produce the same result. We strive to balance the need for questions to be objectively resolvable without making them unnecessarily narrow. In some cases it is possible to forecast on the larger, more meaningful events, even if it is impractical to specify all the potential ways in which those events can occur. In these cases, and especially when the relevant event has significant implications, forecasting only on one or two well-defined, tractable scenarios risks missing important ways in which the event can unfold, making those forecasts considerably less relevant and inconsistent with what has occurred in the real world. For some events, very specific questions can be asked. Other events involve more uncertainty because the ways in which the event can occur are many and varied, even if the event itself is quite clear. We believe that crowd-sourced forecasts can still be useful in predicting these important events, so we ask some questions with less specificity.
Some forecasters will enjoy making predictions on these questions, even though they involve more uncertainty. Others will prefer to stick to the narrower questions that have more specific resolution criteria. The goal of GJ Open is to provide a forum where people can hone their forecasting skills while engaging with the important questions of the day: forecasting will be most rewarding if you choose the questions you are most comfortable with and whose topics are most intriguing to you.
2. How do you assess the credibility of media sources?
Simply put, corroboration and reputation. Generally, media reports from outlets based in “free” nations (e.g., per Freedom House) are to be trusted above reports from outlets based in nations where civil liberties and/or media freedom are contested. In addition, Western media outlets with large circulation and good reputations are treated as having high credibility. Reports from small, local outlets will be assigned less credibility as a rule, though this will be treated on a case-by-case basis. In other cases, minor sources may be deemed credible with regard to specific issues. For example, Syria’s official news agency (SANA) will not generally be considered credible, but it may suffice as a credible source for official announcements from the Syrian government.
3. Which data will be used to resolve a question?
Unless otherwise specified, the question will be resolved based on the data as first available for the time period referenced in the question.
4. When do questions that are resolved by scheduled reports close?
Unless otherwise specified, for questions that ask about data conveyed by a specific report, we will suspend the question as of the end of the time period to be covered by the expected report. For example, if we ask about the US unemployment rate for June 2024, we will suspend the question at the end of June 2024 and wait for the Bureau of Labor Statistics to release its Employment Situation report in the following month.
5. What happens when the outcome of a question is controversial or uncertain?
In such cases, we make every effort to gather all available information in order to inform a decision, which may result in a question being left open to forecasting even after it appears the outcome may already be known. Ultimately, GJ Open reserves the right to make the final decision, in our sole discretion, regarding the resolution of all questions. In cases of substantial controversy or uncertainty about an outcome or the credibility of a source, we may take various measures, from posting a clarifying note to outright voiding the question.
6. How and when are questions closed?
Questions are closed based on when events have occurred, rather than on when events are reported in the media. Although we know that forecasters must consider the likelihood of open-source reporting when making predictions, our scoring remains consistent by focusing on when events themselves occur. This sometimes means that questions have retroactive closing dates, but by focusing on actual events rather than media coverage of those events our forecasts will be more relevant to decision makers.
The official closing date of the question will be listed in the question description along with the resolution criteria after the question has closed. Forecasts made through the end of the calendar day (Pacific Time) prior to the official closing date will be scored for accuracy.
7. It appears the event mentioned in the question has occurred, but the question is still open. Why?
If you believe that a question should close, please send us a resolution request at [email protected] or click the blue gear icon in the top right of the question description section and select “SUGGEST RESOLUTION.” If you are unsure whether an event has closed the question, you can either use the same process and select “REQUEST CLARIFICATION” or email us at [email protected].
Sometimes, we keep questions open after the outcome appears to be known in order to confirm the outcome. In these situations, the question will be closed retroactively, with the official closing date listed in the question description section. Only forecasts made through the calendar day (Pacific Time) prior to the official closing date will be counted when we calculate scores.
8. How are deadlines in questions evaluated?
Time Zone: Unless otherwise specified, we will use Pacific Time (PT) to evaluate deadlines (e.g., if a forecasting question asks about an event occurring before 1 January 2025, the latest a qualifying event can occur is 23:59:59 PT (11:59:59 p.m.) on 31 December 2024).
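In code, that deadline arithmetic might look like the following (purely illustrative; resolutions on the site are made by people, not scripts):

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library in Python 3.9+

PT = ZoneInfo("America/Los_Angeles")

# Latest qualifying instant for an event "before 1 January 2025"
deadline = datetime(2024, 12, 31, 23, 59, 59, tzinfo=PT)

def qualifies(event_time):
    """True if a timezone-aware event time falls on or before the PT deadline."""
    return event_time <= deadline
```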
“As of” vs “Before”: When a question asks about a situation "as of" a certain date, the question and all answer options will generally remain open until the specified closing date. The goal of these questions is to gauge how a situation will look at a certain point in time. For questions about events occurring "before" a certain date, the question and/or individual answer options will generally close as soon as an event has transpired prior to the specified closing date.
9. Do events that occurred before a question launched count toward its resolution?
No, only events that occur after the question’s launch date count toward the resolution of the question. There are exceptions when a question asks for forecasts about a specific time period that may have begun prior to the release of the question (e.g., “What will be the total number of X sold during 2016?”). These questions will clearly state the timeframe that forecasters need to consider and may indicate pertinent events that began before the launch of the forecasting question.
10. What happens if the data used to resolve a question are revised?
Unless otherwise specified, forecasting questions will be resolved when the relevant data are initially released, and subsequent revisions to the data will not affect the question’s resolution.
11. Will rounding be used in questions where there are numeric thresholds?
Unless otherwise specified, the precision of our threshold will match the named source’s precision, so rounding will not be an issue. When this does not hold, rounding will not be used (e.g., 1.011 will be considered greater than 1.01).
12. What happens if the outcome is not known by the question’s end date?
If the question itself specifies a deadline (e.g., Will North Korea conduct a missile test before 1 January 2025?), then events that occur after the deadline will not count. Some questions do not explicitly state a specific deadline, in which case we set a likely closing date. If the outcome is unknown by the likely closing date (e.g., parties take additional time to negotiate before naming a Prime Minister), the end date can, and will, be pushed out. In all cases, the question will be closed as of the end of the calendar day (Pacific Time) prior to the event occurring. See FAQs 6 through 8 above for more details on deadlines.
13. Do you close answer options that become obsolete before the question officially ends?
No. We do, however, encourage you to update your forecasts when answer options become obsolete.
14. How do you resolve questions about military operations?
Military operations can be incredibly complex. Because the details associated with these events are hard to anticipate beforehand, GJ Open tries to use simple, natural language to communicate the big-picture goal of the question (e.g., whether troops will be deployed, whether a ground offensive will be launched, whether key cities will be taken). Because of the complexity of military operations, these questions have a higher degree of uncertainty than other types of questions. In some cases, there may even be a grey-zone period during the course of an event unfolding in which the outcome is unclear. Having simple resolution criteria gives GJ Open's team flexibility to evaluate all the details so that resolution decisions accurately reflect what is happening on the ground. Please keep this uncertainty in mind when deciding whether to forecast on these questions and when making your forecasts.
15. Will you clarify in advance how hypothetical scenarios would be resolved?
We do not generally issue guidance on how we will score hypothetical scenarios because we want to retain flexibility to evaluate all of the relevant details of an event before making a decision. Some questions involve more inherent uncertainty. In those cases, brainstorming potential scenarios and thinking about the probabilities of those scenarios unfolding can be a useful analytic technique.
16. What happens to questions that remain open when a challenge ends?
Sometimes, we ask questions whose answers might not be known before the challenge itself ends. Generally, we'll handle these questions in one of two ways:
1. If the answer to the question is known before the challenge ends (usually, because the event occurred), we'll score the question and use it to rank forecasters on the challenge leaderboard.
2. If the answer to the question is unknown when the challenge ends (usually, the event has not yet happened, but the question has not ended even though the challenge has), we will not score the question as part of the challenge and will roll the question over into the next version of the challenge or a similar challenge.
17. I would like a clarification. What should I do?
First, consult the GJ Open crowd. Other forecasters on the site are your best resource; their comments can be extremely helpful both in understanding the question and in forecasting it. Second, review the FAQs above, which address specific issues that have arisen in the past. Third, submit an official clarification request by emailing [email protected] or by clicking the "ask us for help" hyperlink at the bottom of the question description section for the relevant question.
GJ Open takes clarification requests seriously and will respond to your request after taking time to investigate the issue, and if necessary, consult with the question’s sponsor. If an official clarification is released for all forecasters, it will be added to the question description section with the date of release.
18. How do subsequent legal actions affect question resolution?
When a question asks about an “announcement,” legal actions subsequent to the announcement (e.g., actions by a court) do not affect the resolution of the question. When a question asks about a law, policy, order, or similar act of a legal nature going into effect, subsequent legal actions (including, but not limited to, actions by a court) will affect the resolution of the question only if they prevent the act from taking effect. For instance, if a question asks whether a piece of legislation will become law, that legislation becoming law would close the question irrespective of an injunction subsequently granted by a court to prevent further enforcement of that law. However, if a question asks about a policy change and an executive order is signed which makes that change but an injunction is granted before the change takes effect, the question would not close.
19. Are the examples listed in a question exhaustive?
No. Examples are meant to be illustrative and to help forecasters get a sense of what types of things will count, as well as to provide context for forecasters who may not be familiar with the topic. Unless otherwise stated, an “e.g.” clause or other list of examples should not be considered exhaustive.
Badges
Last updated: 10 November 2020
1. What are badges, and how do they work?
Badges are public awards you can earn by participating in GJ Open. We award new badges (and expire old ones) on the first of each month. The awarding of badges that require a minimum level of activity to earn will be based on your activity from the preceding calendar month. Badges do not affect how we score your forecasts.
2. What badges can I earn, and how do I earn them?
Here is a list of all available badges on the site, and the rules for earning and keeping them:
Frequent Forecaster: For those who are dedicated to making forecasts and updating their beliefs. You will earn this badge if you made at least 10 forecasts in the previous calendar month. This badge will expire at the end of every month and be re-awarded based on the previous month's activity, so if you want to keep it, you have to stay active!
Influencer: The best forecasters don't just put their reputation on the line, but explain how they chose their probabilities. You will earn this badge if you made at least 2 comments (as rationales accompanying forecasts or replies to other forecasters) with at least 2 upvotes each in the previous calendar month. This badge will expire at the end of every month and be re-awarded based on the previous month's activity, so if you want to keep it, keep making high-quality comments!
Economist MVP (Most Valuable Predictor): You proved your value by out-forecasting the competition in The Economist's World in Challenge. This permanent badge was awarded to the top 20% (by Relative Brier Score) of forecasters who answered at least 10 questions in any of the yearly iterations of the challenge.
Rationale badges: Show your work! Each rationale you include with a forecast counts toward your running total of rationales. You'll earn a badge for a total of 25, 50, and 100 rationales. Each month, we'll check again to see if you've earned the next level.
Wharton Signal Seeker: You proved your value by out-forecasting the competition in a Vehicle Innovations Challenge. This permanent badge was awarded to the top 20% (by Relative Brier Score) of forecasters who answered at least 10 questions in any of the yearly iterations of the challenge since 2017.
Forecaster: You earn this permanent badge by forecasting on 50, 100, 250, or 500 cumulative questions. As you reach a new threshold, your badge is upgraded.
Updater: You earn this permanent badge by updating your forecasts 100, 200, 500, or 1000 times. As you reach a new threshold, your badge is upgraded.
Upvoted: You earn this permanent badge by receiving 50, 100, 250, 500, or 1000 cumulative upvotes. As you reach a new threshold, your badge is upgraded.
Supporter: You earn this permanent badge by giving 50, 100, 250, 500, or 1000 upvotes. As you reach a new threshold, your badge is upgraded.
Top Forecaster: You earn this permanent badge by being in the top 10 on 25, 50, 100, or 250 questions (including ties at the 10th spot). As you reach a new threshold, your badge is upgraded. Starting in June 2022, the Top Forecaster badge includes an additional requirement: along with placing in the top 10 on 25/50/100/250 questions, a recipient's cumulative Relative Brier Score must be less than 0. (Recipients of the older Top Forecaster badges will keep those badges, but they will not be updated in the future.)
Profile: You earn this permanent badge by answering at least 5 profile questions.
Followed: You earn this badge by having 20 followers or more.
Follower: You earn this badge by following 20 users or more.
Researcher: You earn this permanent badge by posting 50, 100, 250, 500, or 1000 rationales with links. As you reach a new threshold, your badge is upgraded.
Please note that you can choose which badges you want to feature most prominently on your profile by going to "Edit Profile" and then "Featured Badges".
3. How do I choose which badges to display below my username?
You can choose which badges will be displayed on your profile page, under your tagline, by editing your profile, navigating to Badges on the left side, and checking the Featured? box next to the badges you've earned. All badges that you've earned will be displayed on the Badges tab of your profile, regardless of whether you choose to feature them.