AI Evaluation Contributor (Freelance)
As an AI Evaluation Contributor, I scored and annotated AI-generated text against detailed rubric-based criteria, assessing the tone, coherence, and intent of content to support the fine-tuning of large language models. I delivered consistent, high-quality annotations for AI training datasets across remote platforms.
• Provided structured evaluations of LLM outputs.
• Applied rubric-based guidelines to ensure fairness and consistency.
• Supported the creation of training datasets for model improvement.
• Worked with bilingual content (English and Hindi) as required.