Academic Integrity & Proctoring AI Training Data
Contributed image data for training AI proctoring systems that detect suspicious behavior during online exams. Submissions captured a range of realistic test-taking scenarios and behavioral states, providing ground-truth examples that teach models to distinguish normal exam-taking activity from patterns flagged as potentially suspicious (eye movements, posture shifts, off-screen glances, environmental anomalies). The work supports the development of academic integrity tools, which need diverse, naturalistic human data to reduce false positives and maintain detection accuracy across different test-takers, lighting conditions, and devices. Quality was measured by adherence to capture guidelines covering framing, lighting, duration, and the behavioral specification for each labeled scenario.
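The guideline checks described above can be sketched as an automated submission filter. This is a minimal illustration only: the field names, thresholds, and scenario labels below are hypothetical assumptions, not the actual project's capture specification.

```python
from dataclasses import dataclass

@dataclass
class Capture:
    """Hypothetical metadata for one submitted clip (illustrative fields)."""
    scenario_label: str         # e.g. "off_screen_glance" (assumed label)
    duration_s: float           # clip length in seconds
    mean_luma: float            # average frame brightness, 0-255
    face_in_frame_ratio: float  # fraction of frames with the subject fully framed

def guideline_violations(c: Capture) -> list[str]:
    """Return a list of violated capture guidelines; an empty list means accepted.
    Thresholds are placeholder values for illustration."""
    issues = []
    if not 10.0 <= c.duration_s <= 120.0:
        issues.append("duration outside 10-120 s window")
    if not 40.0 <= c.mean_luma <= 220.0:
        issues.append("lighting too dark or blown out")
    if c.face_in_frame_ratio < 0.9:
        issues.append("subject out of frame too often")
    return issues

clip = Capture("off_screen_glance", duration_s=45.0,
               mean_luma=128.0, face_in_frame_ratio=0.95)
print(guideline_violations(clip))  # → []
```

In practice, checks like these would run before a submission is labeled and added to the training pool, so that only clips meeting the framing, lighting, and duration requirements contribute ground truth.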