AI Evaluation Freelancer
As an AI Evaluation Freelancer at DataAnnotation, I evaluated and ranked outputs from multiple large language models to ensure response quality. I conducted independent fact-verification research, assessing factual grounding and accuracy across diverse subjects. I also annotated and labeled datasets, contributing to RLHF pipelines for LLM fine-tuning.

• Conducted dataset labeling and annotation for language model training
• Performed output ranking and accuracy evaluation for AI responses
• Supported reinforcement learning from human feedback (RLHF) pipelines
• Researched facts and verified data quality for alignment with source material