AI Data Operations - Data Annotator
Working on a large-scale autonomous agent training project focused on evaluating multi-step reasoning, tool-use correctness, and JSON-based state transitions. Tasks include annotating agent decision paths, identifying ambiguities, classifying errors, refining task instructions, and defining gold-standard behaviors across diverse scenarios.

The project spans thousands of agent runs and multi-layer evaluations, requiring strict adherence to accuracy benchmarks, consistency checks, and multi-review quality controls. Deliverables undergo periodic audits, cross-review validation, and performance scoring to maintain high-quality training data for agentic AI systems.
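To make the "JSON-based state transitions" evaluation concrete, here is a minimal sketch of what a transition check on an annotated agent run might look like. The state names, the `ALLOWED` transition map, and the run schema are all hypothetical illustrations, not the project's actual format:

```python
import json

# Hypothetical transition map: which agent states may follow which.
# The real project schema is not specified here; this is illustrative only.
ALLOWED = {
    "plan": {"tool_call", "respond"},
    "tool_call": {"observe"},
    "observe": {"plan", "respond"},
    "respond": set(),
}

def check_transitions(run_json: str) -> list[str]:
    """Return annotation flags for any illegal state transitions in a run."""
    steps = json.loads(run_json)["steps"]
    flags = []
    for prev, curr in zip(steps, steps[1:]):
        if curr["state"] not in ALLOWED.get(prev["state"], set()):
            flags.append(
                f"step {curr['index']}: illegal transition "
                f"{prev['state']} -> {curr['state']}"
            )
    return flags

# Example run with one invalid transition (tool_call should be followed by observe).
run = json.dumps({
    "steps": [
        {"index": 0, "state": "plan"},
        {"index": 1, "state": "tool_call"},
        {"index": 2, "state": "respond"},
    ]
})
print(check_transitions(run))
# → ['step 2: illegal transition tool_call -> respond']
```

A rule-based pass like this would typically run before human review, so annotators spend their time on ambiguous cases rather than mechanically invalid ones.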