I'm currently working on designing incentive mechanisms for peer grading systems. In large classes (e.g., MOOCs), assignment evaluation is costly because it requires a significant amount of TA/instructor workload. Peer grading evaluates assignments by asking the students themselves to grade each other, which means the reliability of these systems depends heavily on the quality of the students' reported grades. However, if the students are left on their own, they are likely either to invest no effort in grading or to report their evaluations to the system untruthfully (e.g., by coordinating among themselves). It is therefore important to give the students appropriate incentives.

One way of doing so is to employ a family of mechanisms called peer prediction. The idea in peer prediction is to reward each grader based on a measure of agreement between her reported grade and those reported by the other graders who graded the same assignment. However, peer prediction mechanisms suffer from uninformative equilibria, in which, for example, every grader reports the same grade regardless of the assignment's quality and still gets rewarded. Spot-checking, in which the instructor obtains the true grades of a sample of the assignments, helps the mechanism eliminate these uninformative equilibria or make them less attractive.

In my research, I plan to investigate peer grading systems in which the true grades are partially available through spot-checking. In particular, I plan to find the minimum required expected amount of spot-checking that provides enough incentives for the students to be truthful (i.e., to put in effort and report their evaluations to the system truthfully).
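To make the reward structure concrete, here is a minimal sketch of one simple peer prediction rule (output agreement: a grader is rewarded when her report matches a randomly chosen peer's report) combined with probabilistic spot-checking. The function names, the binary match reward, and the parameter `spotcheck_prob` are illustrative assumptions, not a specific mechanism from the literature:

```python
import random

def output_agreement_reward(report_i, report_j, match_reward=1.0):
    """Output-agreement peer prediction (illustrative): reward a grader
    when her reported grade matches a randomly chosen peer's report
    on the same assignment."""
    return match_reward if report_i == report_j else 0.0

def spotcheck_reward(report_i, report_j, true_grade,
                     spotcheck_prob=0.1, match_reward=1.0,
                     rng=random.random):
    """With probability spotcheck_prob the assignment is spot-checked
    and the grader is scored against the true grade; otherwise she is
    scored against a peer's report, as in plain output agreement."""
    if rng() < spotcheck_prob:
        # Spot-check branch: agreement with the true grade is rewarded,
        # so uninformative reports (e.g., everyone reporting the same
        # grade) are no longer risk-free.
        return match_reward if report_i == true_grade else 0.0
    return output_agreement_reward(report_i, report_j, match_reward)
```

Under a rule like this, raising `spotcheck_prob` increases the expected loss from colluding on an uninformative report; the research question above asks how small that probability can be while truthful effort remains the best response.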