AI Annotator (Coding)
One of the key projects I contributed to at Data Annotation Tech was the Poe Bird project, which involved evaluating and comparing AI-generated responses on structured criteria including accuracy, completeness, clarity, and tone. I assessed outputs for both simple and complex user prompts, rated responses against detailed rubrics, and wrote justifications for each evaluation. The work demanded sharp critical thinking, strong language skills, and the ability to spot subtle differences in reasoning quality and instruction following. By providing consistent, high-quality feedback on output performance, my work helped refine large language model behavior.