AI Training Contributor (Annotator)
As an AI Training Contributor (Annotator) at Handshake AI and Outlier AI, I evaluated AI-generated text responses for accuracy, tone, and relevance, and provided structured feedback to improve prompt engineering and overall LLM performance. My responsibilities included documenting the rationale behind evaluation decisions and contributing to the refinement of safety and behavioral filters within large language models.
• Rated AI-generated responses for quality and factual accuracy.
• Delivered detailed suggestions to improve model comprehension of prompts.
• Logged rationale for judgment calls to inform ethical and safety guidelines.
• Worked with a range of annotation and feedback tools to support LLM alignment.