LLM Trainer
As an LLM Trainer at Turing, I improved large language models by creating and evaluating high-quality code datasets for software development. I wrote, reviewed, and annotated Python code, designed training tasks, and validated model outputs for accuracy and efficiency. My work included data annotation, identifying model weaknesses, and providing targeted feedback to improve real-world AI performance.

• Evaluated model-generated code for correctness, best practices, and efficiency.
• Annotated and validated code datasets to ensure labeling quality.
• Identified gaps in model capabilities and provided corrective feedback.
• Collaborated with AI teams to improve model performance.