How do I interpret results for the MCCQE Part I and MCCQE Part II?

There have been two important changes to the MCCQE Part I and MCCQE Part II.

  1. New reporting scales:
  • The new scale for the MCCQE Part I is 100 to 400 and the new scale for the MCCQE Part II is 50 to 250
  • Scores can be compared to the mean (M) and standard deviation (SD) of total scores from the session in which the new score scale was established:
    • April 2018 for MCCQE Part I
    • October 2018 for MCCQE Part II


¹ Based on candidates in April 2018; ² Based on candidates in October 2018; ³ Based on candidates in April 2015

⁴ Prior to October 2018, the MCCQE Part II used a different blueprint. The scale ranged from 50 to 950 with a mean of 500 and a standard deviation of 50, established using all results from the May 2015 session. Typically, subsequent cohorts would be compared to the session in which the score scale was established, but the composition of cohorts changed after scale development due to the implementation of capacity limits for the MCCQE Part II. Between October 2015 and May 2018, the MCCQE Part II had a more stable cohort composition, with an average mean of 588 and a standard deviation of 89.
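The comparison described above can be illustrated by expressing a score in standard-deviation units relative to the reference session. This is a hypothetical sketch, not an official MCC calculation; the only real figures used are the pre-October-2018 Part II values quoted in the footnote (mean 500, SD 50).

```python
def sd_units_from_mean(score, mean, sd):
    """Express a total score as standard deviations above or below the reference mean."""
    return (score - mean) / sd

# Illustration using the pre-October-2018 MCCQE Part II scale quoted above
# (mean 500, SD 50 on a 50-950 scale). A score of 575 sits 1.5 SDs above the mean.
print(sd_units_from_mean(575, 500, 50))  # → 1.5
```

The same arithmetic applies to the new scales once the candidate knows the M and SD reported for the session in which each scale was established.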

  2. New pass scores:
  • The new pass score for the MCCQE Part I is 226
  • The new pass score for the MCCQE Part II is 138

Statement of Results and Supplemental Information Report:

Candidates will continue to receive the same two documents in their accounts: the Statement of Results and the Supplemental Information Report.

Statement of Results:

  • Provides the candidate with their final result, their total score and the pass score
  • The body of the report also provides the M and SD of when the scale was established

Supplemental Information Report:

  • Formerly known as the Supplemental Feedback Report
  • Provides the candidate with subscores in graphical format for dimensions of care and physician activities
  • Allows candidates to assess their relative strengths and weaknesses and compare their subscores with the mean subscores of first-time test takers who passed the exam

How to interpret the results provided in these documents:

  • Small differences in subscores, or an overlap between their standard error of measurement (SEM) bands, indicate that performance in those domains was similar
  • Overlap between the SEM band of a subscore and the mean score of first-time test takers who passed the exam signifies that the candidate’s performance is similar to that mean score
  • No overlap suggests that the candidate performed significantly better or significantly worse than first-time test takers who passed
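The overlap rule above can be sketched as a simple interval check. The SEM value and subscores here are purely illustrative, not figures from an actual report.

```python
def sem_band(score, sem):
    """Return the interval score ± one standard error of measurement."""
    return (score - sem, score + sem)

def bands_overlap(a, b):
    """True if two (low, high) intervals overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

# Illustrative numbers only: a subscore of 230 with an SEM of 8,
# compared against a reference mean of 240 (treated as a point).
candidate = sem_band(230, 8)   # (222, 238)
reference = (240, 240)
if bands_overlap(candidate, reference):
    print("performance similar to the reference mean")
else:
    print("performance meaningfully different from the reference mean")
```

In this illustration the band (222, 238) does not reach 240, so the rule would flag the subscore as meaningfully below the reference mean; had the band contained 240, the performance would be read as similar to it.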

For more information, watch the How to interpret results for the MCCQE Part I and MCCQE Part II video, read our score interpretation guidelines, or contact us by email.
