AI Coding Evaluator and AI Training Data Validator
This role involved evaluating AI-generated Python and JavaScript code for correctness and quality. Responsibilities included reviewing, debugging, and ranking multiple model outputs, as well as providing performance feedback to guide model improvements. Additional duties centered on verifying and validating machine learning datasets, reviewing prompt engineering outputs, and supporting reinforcement learning from human feedback (RLHF) workflows.
• Evaluated and ranked AI-generated code for accuracy and logical soundness
• Debugged algorithms and identified performance and syntax issues in model outputs
• Validated datasets for machine learning and NLP model training
• Performed RLHF tasks, including providing structured feedback for model improvement