Technical QA Tester & AI Code Evaluator
Evaluated and improved AI-generated code snippets as part of LLM model development workflows. Applied Python debugging and optimization to improve the accuracy and efficiency of code used in training datasets. Provided detailed bug reports and code corrections to support high-quality LLM data annotation and evaluation.
• Conducted systematic exploratory testing tailored to AI code review
• Optimized Python scripts for inclusion in LLM training datasets
• Delivered reproducible documentation of system and code failures
• Collaborated with engineering teams to resolve complex issues