AI Training Specialist: Evaluation, Prompt Engineering, and Annotation
Reviewed and evaluated AI-generated code and technical content for accuracy and quality. Assessed coding tasks and prompt outputs, providing calibrated feedback on AI model performance. Performed structured data annotation and quality assessment for code-related tasks using established criteria.
• Conducted code reviews in Python, JavaScript, SQL, and Apex for correctness, efficiency, and style.
• Designed and refined prompts to improve AI output quality and consistency.
• Evaluated AI-generated technical documentation for accuracy, completeness, and clarity.
• Used annotation platforms including Alignerr and Labelbox to provide ratings and feedback.