AI Content Evaluation & Quality Annotation
Evaluated AI-generated text outputs against rubric-based quality criteria, including clarity, accuracy, tone, safety, and instruction-following. Provided structured written feedback to improve response quality, readability, and usefulness. Identified recurring error patterns and inconsistencies across large volumes of content while meeting defined quality guidelines and turnaround requirements. Independently evaluated LLM-generated responses and delivered detailed feedback aligned with platform-specific standards.