Independent AI Data Trainer & Evaluator
As an independent AI Data Trainer & Evaluator, I evaluated AI-generated text outputs from multiple large language models, focusing on factual accuracy and adherence to quality rubrics. I crafted and tested diverse prompts and systematically annotated, classified, and ranked AI responses for quality, relevance, and compliance. My work included structured RLHF feedback to improve model alignment and consistent documentation of all quality processes.
• Assessed logical coherence, instruction-following, tone, and formatting in AI model outputs.
• Annotated, classified, and labeled AI-generated content for compliance and safety.
• Compared, ranked, and documented AI responses using defined criteria.
• Generated detailed task logs to ensure consistency across evaluation sessions.