LLM Model Trainer
Worked as an LLM trainer focused on improving the performance, reliability, and accuracy of large language models. Contributed to AI training projects by evaluating and refining model outputs on real-world coding tasks in JavaScript, TypeScript, and Python. Responsibilities included:
- Analyzing AI-generated code, identifying bugs and inefficiencies, writing high-quality test cases, and providing structured feedback to improve model behavior.
- Building and testing full-stack and backend solutions to benchmark model performance, and iteratively refining prompts to achieve more precise, optimized outputs.
- Collaborating with distributed teams to keep evaluation standards consistent and to support continuous improvement of AI systems in fast-paced, production-like environments.
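As a hedged illustration of the review workflow described above (not taken from any specific project), the sketch below shows how a reviewer might probe a model-generated helper with targeted edge-case tests; the function name `slugify` and the harness `run_review_tests` are hypothetical:

```python
import re

def slugify(text: str) -> str:
    """Hypothetical model-generated helper under review."""
    text = text.strip().lower()
    text = re.sub(r"[^a-z0-9]+", "-", text)  # collapse non-alphanumerics to hyphens
    return text.strip("-")

def run_review_tests() -> list:
    """Run edge-case checks a reviewer might write; return failure descriptions."""
    cases = [
        ("Hello World", "hello-world"),
        ("  spaced  out  ", "spaced-out"),   # surrounding/repeated whitespace
        ("Already-Slugged", "already-slugged"),
        ("", ""),                            # empty input must not raise
    ]
    failures = []
    for raw, expected in cases:
        got = slugify(raw)
        if got != expected:
            failures.append(f"slugify({raw!r}) == {got!r}, expected {expected!r}")
    return failures
```

An empty failure list would signal the output passes this reviewer's suite; any entries would be fed back as structured feedback on the model's code.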