Freelance Data Annotator
As a Freelance Data Annotator, I evaluate and compare responses from multiple AI models for reasoning quality, accuracy, and logical consistency. I score chain-of-thought outputs against structured evaluation rubrics and provide feedback that improves model reasoning on multi-step tasks. This rubric-driven annotation work helps ensure high-quality data for AI systems.
• Compared and rated LLM outputs for logical consistency and factual accuracy.
• Assessed multi-step reasoning and chain-of-thought responses against benchmarks.
• Applied structured rubrics for data quality assurance and feedback reporting.
• Focused on improving model performance and reducing label noise.