Evaluator/Trainer for AI Training Platforms
Served as an evaluator and trainer for AI training platforms focused on large language models. Assessed AI-generated output for quality and accuracy in language generation tasks, providing feedback and ratings on model-generated content to improve model performance and relevance.
• Evaluated responses produced by AI models for correctness and coherence.
• Participated in iterative training cycles to refine AI outputs.
• Used proprietary and commercial AI tools for assessment and training.
• Collaborated with platform teams to enhance evaluation frameworks.