
Grading paper homeworks & exams

How to use Pensive AI grader to grade paper-based assignments

Written by Yoonseok Yang

Question List Page

When you click the Grading tab in the left sidebar, you'll see the list of questions. Each question shows its question type, name, a play button, autograde progress, and review progress.

  • Play button: pressing the play button triggers the autograder to start grading submissions. We recommend pressing it only after you finalize the rubrics. You can stop the autograder at any time by clicking the button again.

  • Autograde progress: shows how many submissions have been graded as a percentage

  • Review progress: shows how many submissions have been reviewed as a percentage

Step 1. Press the play button to trigger the autograder

When you press the play button, the autograder starts running. Click the refresh icon in the top bar to see the current autograde progress.


Step 2. Review Grades

Pensive tracks review progress separately from grading progress so that instructors can manually review AI-generated grades. Since the Pensive AI Grader grades all submissions by default, the review progress bar lets you confirm results before you release grades.

Here is a sample view of the review page.

AI Transcription

Under the student solution (or above it, for non-templated assignments), Pensive shows a clean, AI-generated transcription of the student's work.

Each transcription is assigned a confidence level—high or low—based on model certainty. Transcription confidence is not shown on the review page, but you can hide low-confidence transcriptions in the Settings tab.

AI grade

On the right, you can see the rubric items selected by the AI grader and the resulting score.

AI Confidence

Pensive assigns each AI-generated grade a confidence level—high, medium, or low. Instructors can use these confidence levels to customize the amount of oversight they apply to AI-generated results.

For lower-stakes assignments like homework, instructors may choose to rely entirely on the AI grader. For high-stakes assessments such as exams, instructors can limit trust to high-confidence grades—tuned to match human-level accuracy—while manually reviewing lower-confidence results.

To support this flexible oversight, instructors can configure Pensive’s behavior based on confidence levels. For more information, please refer to the confidence page.

AI Feedback

In large STEM courses, students often receive limited feedback on open-ended questions due to the grading burden. Pensive addresses this by allowing instructors to generate individualized feedback after grading.

Once grading is complete, the system can generate comments that explain the student's mistakes, referencing the associated rubric items. Instructors can also provide custom prompts to guide the style, tone, or content of the generated feedback.

AI Summary

To improve grading transparency and help instructors understand AI decisions, Pensive generates a concise summary of each student response. These summaries highlight the key reasoning behind the selected rubric items and flag major errors, enabling instructors to verify the AI’s work more efficiently.

Step 3. Mark as Reviewed

When you are done reviewing, click Mark as reviewed at the bottom corner (keyboard shortcut R) to finish the review and move on to the next submission.

You can always come back and change the grade.
