
Calibrate AI Grader

Written by Yoonseok Yang

After the initial rubric generation, Pensive supports a calibration process to fine-tune the rubric and grading behavior based on instructor input. This is particularly useful for open-ended questions, where rubric interpretation can be subjective. For example, instructors may differ in how they interpret vague labels such as “insufficient explanation” (see Figure below).

To calibrate the AI grader, instructors review a small number of AI-graded examples and correct any mistakes by adjusting the rubric selections. Pensive then uses these corrections to refine both the rubric interpretation and the grading logic.
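If it helps to picture this workflow, the sketch below is a rough, hypothetical illustration in Python of what collecting instructor corrections over a batch of AI-graded examples might look like. It is not Pensive's actual code or API; the GradedExample structure and all field names are assumptions made purely for the example.

```python
from dataclasses import dataclass

@dataclass
class GradedExample:
    """Hypothetical record of one AI-graded submission (illustrative only)."""
    submission_id: str
    ai_rubric_selection: list[str]                        # rubric items the AI applied
    instructor_rubric_selection: list[str] | None = None  # set when the instructor reviews

def collect_corrections(examples: list[GradedExample]) -> list[GradedExample]:
    """Keep only the examples where the instructor changed the AI's rubric selection."""
    return [
        ex for ex in examples
        if ex.instructor_rubric_selection is not None
        and set(ex.instructor_rubric_selection) != set(ex.ai_rubric_selection)
    ]
```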

During calibration, the system synthesizes grading wisdoms—detailed, explainable instructions derived from discrepancies between the AI’s initial grades and the instructor's corrections. These wisdoms help the AI capture grading nuances more accurately and are editable by instructors for greater control.
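Continuing the hypothetical sketch above (and assuming the GradedExample class from the previous snippet), the code below illustrates the general idea of turning AI-versus-instructor discrepancies into draft, editable grading instructions. Again, the function name and data shapes are illustrative assumptions, not Pensive's implementation.

```python
from collections import defaultdict

def synthesize_wisdoms(corrections: list[GradedExample]) -> dict[str, str]:
    """Draft one 'grading wisdom' per rubric item the AI and the instructor disagreed on."""
    notes: dict[str, list[str]] = defaultdict(list)
    for ex in corrections:
        ai = set(ex.ai_rubric_selection)
        human = set(ex.instructor_rubric_selection or [])
        for item in ai - human:
            notes[item].append(f"The AI applied '{item}' to {ex.submission_id}, but the instructor removed it.")
        for item in human - ai:
            notes[item].append(f"The instructor applied '{item}' to {ex.submission_id}, but the AI missed it.")
    # In the real product, these observations would be condensed into concise,
    # explainable instructions that the instructor can then review and edit.
    return {item: " ".join(observations) for item, observations in notes.items()}
```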

Here's how you can calibrate the AI grader:

1. Review a small number of AI-graded examples for the question you want to calibrate.
2. Correct any mistakes by adjusting the rubric selections.
3. Pensive uses your corrections to refine the rubric interpretation and grading logic, synthesizing grading wisdoms from the discrepancies.
4. Review the generated wisdoms and edit them as needed for greater control.
