Annotator
I have contributed to multiple AI data annotation and content evaluation projects through Welocalize and OneForma, supporting the development and improvement of machine learning models.

At Welocalize (Search Quality Rater), I evaluated large-scale search engine results on the RaterHub platform, assessing relevance, usefulness, and alignment with user intent across varied content types. Tasks included rating web pages, identifying content quality issues, and applying detailed guidelines to ensure consistency and accuracy, all while handling high volumes of data under strict quality standards and calibration alignment.

At OneForma, I worked on Project Lighthouse and Project CherryOpal using Apple Connect tools. These projects involved data annotation, validation, and content evaluation across different media types. My responsibilities included labeling and reviewing content against structured criteria, performing quality checks, and keeping all annotations aligned with evolving project guidelines. The datasets were large-scale and required consistent judgment, attention to detail, and timely delivery.

Across these projects, I consistently followed quality assurance measures such as guideline compliance, inter-rater consistency, accuracy validation, and periodic calibration reviews, and I ensured that all outputs met project-specific standards and deadlines.
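As an illustration of the inter-rater consistency checks mentioned above, the sketch below computes Cohen's kappa, a standard measure of agreement between two raters corrected for chance. The ratings and label names are hypothetical examples, not data from any of these projects, and the projects themselves may have used different agreement metrics.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters on the same items, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters gave the same label.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement: product of each label's marginal frequencies.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical relevance ratings from two raters on ten items.
a = ["high", "high", "low", "medium", "high", "low", "low", "medium", "high", "low"]
b = ["high", "medium", "low", "medium", "high", "low", "high", "medium", "high", "low"]
print(round(cohens_kappa(a, b), 3))  # → 0.697
```

A kappa near 1 indicates strong agreement beyond chance; values near 0 mean agreement is no better than random labeling, which is why calibration reviews target raters whose kappa drifts low.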