AI Content Evaluator & Data Annotator
Evaluated AI-generated text, chatbot conversations, and language model outputs for accuracy, coherence, and adherence to instructions. Provided structured ranking and rating feedback to support RLHF pipelines and model improvement. Annotated and labelled text, image, and conversational datasets for AI training.
• Reviewed and scored content across multiple domains.
• Applied detailed rubrics and style guides to ensure consistency.
• Wrote and refined prompts to test AI model capabilities.
• Supported RLHF, prompt engineering, and fine-tuning workflows.