AI Trainer
As an AI Trainer, I designed and evaluated multi-turn conversational scenarios to simulate real-world interactions between users and AI assistants. I labeled and assessed large language model (LLM) outputs for reasoning, coherence, and correctness using structured JSON-based annotation guidelines, provided detailed feedback and error analysis to improve model performance, and ensured strict adherence to evaluation standards and annotation quality.
• Designed realistic multi-turn conversations and simulated task-orchestration workflows
• Applied function-calling and tool-based workflow logic in LLM evaluations
• Maintained quality and throughput benchmarks while collaborating remotely
• Worked extensively with JSON data structures and tool-based labeling platforms