AI Content Evaluation & Prompt-Response Specialist
- Evaluated and rated AI-generated responses for accuracy, coherence, contextual relevance, tone, and safety compliance.
- Compared multiple model outputs and selected the higher-quality response using structured evaluation rubrics.
- Performed supervised fine-tuning (SFT) tasks by writing high-quality prompt-response pairs aligned with project guidelines, improving response clarity, logical structure, and factual reliability while preserving natural language flow.
- Applied Reinforcement Learning from Human Feedback (RLHF) principles, identifying subtle quality differences between outputs and delivering consistent, guideline-based judgments across diverse content types.
- Maintained high annotation consistency across large task batches while adhering strictly to formatting, style, and evaluation standards.