AI Data Annotation Contributor (Contract)
As an AI Data Annotation Contributor for Handshake AI, I reviewed and evaluated AI-generated text responses using detailed rubrics. My work focused on assessing logical coherence, instruction-following, and factual accuracy to ensure high-quality AI outputs. I documented all evaluation decisions clearly for use in model training and continual improvement.

• Applied structured guidelines consistently across diverse annotation tasks.
• Compared multiple AI outputs and selected the best responses based on given criteria.
• Identified errors, inconsistencies, and missing context in model responses.
• Maintained high standards of clarity and accuracy throughout the annotation process.