Frontend Tutor — Data Annotation/AI Output Evaluation Tasks
As a Frontend Tutor, developed and implemented clear annotation guidelines and evaluation rubrics for web development student projects. Assessed and provided feedback on code submissions in a process closely mirroring reinforcement learning from human feedback (RLHF) and AI output evaluation tasks. Designed structured instructions for reviewing and rating programming outputs to ensure high-quality, consistent assessments.

• Authored precise rubric-based instructions to guide annotation and quality reviews.
• Evaluated student code with a focus on clarity, correctness, and best-practice adherence.
• Identified and categorized frequent error patterns, simulating AI model failure detection.
• Communicated technical judgments and rationale tailored for non-expert audiences.