AI Trainer and Data Annotator (Claude API Project Work)
As an AI Trainer and Data Annotator, I designed, tested, and evaluated prompts and responses for large language models via the Anthropic Claude API. My responsibilities included output evaluation, prompt iteration, and structured feedback to improve AI output quality. The role required a strong annotation mindset and strict adherence to detailed guidelines.
• Conducted prompt engineering and iterative refinement to improve LLM output accuracy
• Evaluated, rated, and compared AI-generated text for accuracy, tone, and helpfulness
• Synthesized complex text data and flagged inconsistencies or errors per guidelines
• Applied structured, documented annotation and reporting practices using internal/proprietary tools
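As a minimal sketch of the rating-and-comparison workflow described above: the schema, rubric dimensions, and function names below are hypothetical illustrations (the actual internal tools were proprietary), but they show the shape of validating ratings against guidelines and comparing two candidate responses.

```python
from dataclasses import dataclass, field

# Hypothetical rubric dimensions; stand-ins for the real guideline criteria.
RUBRIC = ("accuracy", "tone", "helpfulness")

@dataclass
class Annotation:
    """One rated model response (illustrative schema, not a real tool's format)."""
    prompt_id: str
    response: str
    scores: dict            # dimension -> rating on a 1..5 scale
    flags: list = field(default_factory=list)

def validate(a: Annotation) -> Annotation:
    """Flag ratings that violate guidelines: missing dimensions or out-of-range scores."""
    for dim in RUBRIC:
        if dim not in a.scores:
            a.flags.append(f"missing:{dim}")
        elif not 1 <= a.scores[dim] <= 5:
            a.flags.append(f"out_of_range:{dim}")
    return a

def prefer(a: Annotation, b: Annotation) -> Annotation:
    """Pairwise comparison: prefer the response with the higher total rubric score."""
    def total(x: Annotation) -> int:
        return sum(x.scores.get(d, 0) for d in RUBRIC)
    return a if total(a) >= total(b) else b

# Example: a complete, in-range annotation passes validation cleanly,
# while an incomplete one accumulates flags for reviewer follow-up.
good = validate(Annotation("p1", "Response A", {"accuracy": 5, "tone": 4, "helpfulness": 5}))
bad = validate(Annotation("p1", "Response B", {"accuracy": 6, "tone": 4}))
```

In practice the rated text would come from Claude API calls, but keeping validation and comparison logic separate from the API client makes the annotation rules easy to test and audit on their own.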