AI Output Reviewer & Annotator (General/Multiple Projects)
Reviewed and rated AI-generated scripts and written materials for accuracy, appropriateness, and quality. Evaluated and ranked model responses to improve AI system outputs and training data, with a focus on annotation-guideline compliance and fact-checking for mission-critical content.
• Compared model outputs to identify inconsistencies and factual errors.
• Labeled content for AI training and evaluation.
• Applied consistent QA and annotation standards across platforms.
• Supported model improvement through structured output reviews.