AI Trainer
Worked on multiple AI training and evaluation projects spanning text, STEM, coding, software engineering, and multimodal data. Responsibilities included comparing AI-generated responses, labeling and categorizing outputs, validating factual accuracy, checking instruction-following, rating response quality, drafting realistic prompts, and reviewing technical artifacts such as pull requests and code-related tasks. Depending on the project, the work ranged from text-only evaluation to image-based and mixed-modality tasks.

Project volume varied by assignment, with work completed under task-based limits, timed sessions, and project-specific review workflows. Quality standards required close adherence to detailed rubrics covering correctness, relevance, clarity, completeness, safety, and consistency. Maintained high annotation accuracy, followed project guidelines precisely, provided well-justified evaluations, and identified edge cases and failure modes in model behavior.

Across these projects, developed strong skills in quality assurance, structured judgment, technical evaluation, and maintaining consistency across large sets of labeling and response-review tasks.