AI Data Annotator – Project Aether (Freelance)
As an AI Data Annotator for Project Aether, I evaluated AI-generated responses against structured guidelines, ranking outputs by relevance and accuracy and providing clear justifications for each scoring decision. I maintained consistency across large-scale annotation tasks in a fully remote setting.
• Conducted quality assurance checks on AI responses for logic, accuracy, and relevance.
• Delivered structured feedback to support model performance improvements.
• Used remote annotation platforms to complete a wide range of tasks efficiently.
• Specialized in instruction adherence and comparative ranking methodologies.