AI Research & Content Evaluator and Annotator — Freelance / Independent, Remote
As an AI Research & Content Evaluator and Annotator, I ranked and assessed the outputs of generative AI models across multiple modalities. My work involved large-scale annotation to evaluate model performance, quality, and alignment with prompts. I contributed to preference-based reinforcement learning and provided feedback to improve generative processes.
• Ranked and rated hundreds of AI outputs (images, audio, video) daily.
• Used comparative judgment methods for precise and systematic evaluation.
• Ensured annotation accuracy and efficiency as part of model training workflows.
• Collaborated with cross-functional AI teams to identify quality trends and optimize outputs.