AI Data Annotator & Quality Assurance Reviewer
Contributor to multi-domain human-in-the-loop AI projects focused on data annotation, LLM output evaluation, and quality assurance. Reviewed AI-generated text, images, and video for accuracy, coherence, safety, and guideline compliance; performed QA audits of other contributors' work; identified systematic issues and edge cases; provided structured feedback; and validated datasets prior to model training. Projects included content freshness checks, product and brand evaluation, dialogue safety, and agent behavior testing.