AI Search & Content Evaluation Specialist
Worked on AI training and evaluation projects through RWS spanning a range of data annotation and review tasks. Responsibilities included drawing bounding boxes for object detection, reviewing and validating code outputs for correctness and logic, evaluating and answering domain-specific questions, and assessing and generating high-quality text summaries. Followed detailed annotation and evaluation guidelines to ensure consistency, accuracy, and high inter-rater agreement. Performed quality checks, flagged edge cases, and provided structured feedback to help improve model performance. Met productivity targets while delivering precise, unbiased annotations in a fully remote environment.