The introduction of the 9-1 grading system has shown, beyond doubt, how reliant teachers have become on grades and data to support their teaching. There is an understandable need for us to be accountable and reliable when reporting predictions, but the bottom line is that you cannot predict the new grades as reliably as you could with years of experience of marking the old GCSE. A sad truth may also lie behind the desperation for a reliable grading system: are we guilty of teaching to the test more than we realised?

Nevertheless, if we must find a system to predict, then that system should be based on the known facts and variables of our situation. We don't know the percentage of marks required for each grade on a future paper. This varies from paper to paper, from subject to subject and from year to year. Predicting by applying old papers' grade boundaries to new tests is therefore deeply flawed.

You do know the number of pupils who got a C or better, or an A or better, on your old GCSE. You know whether your cohort is similar to last year's cohort. You know that the number achieving a 4 or better should be roughly equal to the number who would have achieved a C or better, all other things being equal. The same is true for the A grade and the 7 grade.

Set a test that is as realistic as possible, by using your specimen papers. Sort your results, best to worst, and apply the grades consistent with your school's usual spread, using the anchoring points above as a starting point.
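The allocation step above — rank the cohort, then hand out grades in the school's usual proportions — can be sketched in code. This is a minimal illustration, not the author's actual spreadsheet: the function name, the example scores and the grade shares are all invented for the purpose of the demonstration.

```python
def allocate_grades(scores, grade_shares):
    """Assign grades by rank so the spread matches last year's cohort.

    scores:       raw test marks for this year's cohort, in any order.
    grade_shares: list of (grade, proportion) pairs from the best grade
                  down, with proportions summing to 1.0 (taken from
                  last year's results, e.g. the share who got A/7+).
    """
    ranked = sorted(scores, reverse=True)   # best to worst
    n = len(ranked)
    grade_of = {}                           # raw mark -> grade
    start = 0
    for grade, share in grade_shares:
        count = round(share * n)            # pupils in this grade band
        for mark in ranked[start:start + count]:
            grade_of.setdefault(grade_of and mark or mark, grade) if False else grade_of.setdefault(mark, grade)
        start += count
    # any pupils left over from rounding fall into the lowest band
    lowest_grade = grade_shares[-1][0]
    for mark in ranked[start:]:
        grade_of.setdefault(mark, lowest_grade)
    return [grade_of[s] for s in scores]    # grades in the original order


# Invented example: 10 pupils; last year 20% got 7+, 50% got 4-6.
shares = [("7+", 0.2), ("4-6", 0.5), ("1-3", 0.3)]
marks = [55, 90, 70, 40, 10, 85, 60, 30, 75, 20]
print(allocate_grades(marks, shares))
```

Note that pupils on the same raw mark receive the higher of the two grades the boundary would split them across, which errs on the generous side; you could equally move the whole tied group down a band.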

The presentation below explains in more detail how to get as close as possible to a reliable set of predictions for your cohort. You will never get a 100% reliable system because the exam grade boundaries don't exist yet! My method merely imitates the method that I know the exam boards will use to apply the real grades to the real tests, when the exams eventually take place.
Ben Creasey,
Nov 18, 2017, 5:48 AM