Pensive assigns each AI-generated grade a confidence level—high, medium, or low. Instructors can use these confidence scores to customize the level of oversight they apply to AI-generated results. In a recent paper we published, we show that grades marked as high confidence achieve over 95% average accuracy, measured across 300K questions graded on Pensive.
For lower-stakes assignments like homework, instructors may choose to rely entirely on the AI grader. For high-stakes assessments such as exams, instructors can limit trust to high-confidence grades—tuned to match human-level accuracy—while manually reviewing lower-confidence results.
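This oversight policy can be sketched as a small decision function. The function and its names are purely illustrative—Pensive exposes these controls through its UI, not through a code API:

```python
# Hypothetical sketch of an instructor's oversight policy mapping
# confidence levels to actions. All names here are illustrative,
# not part of any actual Pensive API.

def review_action(confidence: str, high_stakes: bool) -> str:
    """Return the oversight applied to one AI-graded response."""
    if not high_stakes:
        return "accept"          # homework: rely entirely on the AI grader
    if confidence == "high":
        return "accept"          # exams: trust only high-confidence grades
    return "manual_review"       # medium/low confidence: instructor reviews

# A low-confidence grade on an exam is flagged for manual review:
print(review_action("low", high_stakes=True))       # manual_review
# The same grade on homework is accepted as-is:
print(review_action("low", high_stakes=False))      # accept
```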
To support this flexible oversight, instructors can configure Pensive’s behavior based on confidence levels:
Transcription visibility: Show or hide the AI-generated transcription of student responses.
Autograde visibility: Show or hide the AI-assigned grade.
Automatic review: Require manual confirmation of AI-assigned grades.
Grading assistance can be configured at both the assignment level and the question level. Assignment-level settings live in the Settings tab; question-level settings live in the question's rubric inside the Rubrics tab.
