AI Trainer & Code Evaluation Specialist
Evaluated LLM and other AI-generated outputs for correctness, clarity, and instruction adherence. Assessed model prompts and responses within fine-tuning and reinforcement learning from human feedback (RLHF) data pipelines. Provided structured annotation, prompt engineering, and code evaluation on AI training platforms.
• Scored response accuracy against evaluation rubrics
• Annotated model outputs to drive quality improvements
• Applied CS fundamentals to assess code validity
• Supplied structured feedback supporting RLHF and model fine-tuning