AI Data Annotator (LLM Evaluation & Prompt Engineering)
As an AI Data Annotator specializing in LLMs, I evaluated and graded AI-generated responses for quality and accuracy. I regularly designed and wrote high-quality prompts, refined textual data, and ensured adherence to project guidelines. My expertise also included detecting hallucinations and providing actionable editorial feedback to improve model outputs.

• Graded AI outputs against strict criteria including factuality, safety, and logical consistency.
• Designed challenging prompts for LLM testing and training across multiple domains.
• Reviewed, edited, and formatted large volumes of text data for clarity and consistency.
• Maintained a 98%+ quality assurance score through rigorous adherence to standards.