AI Data Annotation & Prompt Evaluation (Freelance/Remote)
Participated in AI training tasks for academic and scientific content, focusing on data annotation and prompt evaluation. Applied subject-matter expertise to evaluate, correct, and rate AI-generated outputs for academic quality and factual accuracy, and helped refine AI models in the research and scientific writing domain through structured feedback on annotations and prompt performance.
• Annotated and evaluated scientific texts and academic prompts
• Applied standard evaluation guidelines to rate AI outputs for research writing
• Collaborated remotely while adhering to structured task protocols
• Provided feedback to improve large language model (LLM) performance