LLM Python Engineer (AI Training and Model Evaluation)
Built workflows to train and evaluate machine learning models by supplying corrected outputs when the model produced errors. Designed and implemented model-break scenarios with deliberately induced bugs to test and improve model robustness. Coordinated the review and validation of trainer work against project guidelines so that only high-quality, accurate outputs were sent to the client.

• Trained and evaluated LLMs using custom workflows.
• Provided correct code solutions to guide model learning and improvement.
• Designed edge-case scenarios for robustness checks.
• Validated pipeline and data quality with automated tests.
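The automated quality checks described above can be illustrated with a minimal sketch. All names here (`REQUIRED_FIELDS`, `validate_record`, the record schema) are hypothetical, chosen only to show the general shape of validating trainer-produced records against guidelines before delivery:

```python
# Hypothetical schema for a trainer record: every record must carry these
# fields, and an "incorrect" verdict must be accompanied by a correction.
REQUIRED_FIELDS = {"prompt", "model_output", "corrected_output", "verdict"}
ALLOWED_VERDICTS = {"correct", "incorrect"}

def validate_record(record: dict) -> list:
    """Return a list of guideline violations for one trainer record."""
    errors = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
        return errors  # cannot run further checks on an incomplete record
    if record["verdict"] not in ALLOWED_VERDICTS:
        errors.append(f"invalid verdict: {record['verdict']!r}")
    # An "incorrect" verdict without a non-empty correction violates guidelines.
    if record["verdict"] == "incorrect" and not record["corrected_output"].strip():
        errors.append("incorrect verdict but empty corrected_output")
    return errors

good = {"prompt": "p", "model_output": "o",
        "corrected_output": "fixed code", "verdict": "incorrect"}
bad = {"prompt": "p", "model_output": "o",
       "corrected_output": "", "verdict": "incorrect"}
```

In practice such checks would run in a test suite (e.g. pytest) over every batch before client delivery, so malformed records are caught automatically rather than by manual review.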