AI Language Data Annotation & Evaluation (Multilingual)
Worked on multiple AI training and data-labeling projects focused on text-based annotation, linguistic evaluation, and content relevance rating. Tasks included guideline-driven labeling, intent and relevance assessment, quality validation, and multilingual evaluation for AI and LLM training datasets. Consistently met productivity and accuracy targets while adhering to detailed annotation guidelines, maintained high quality across remote, task-based workflows, and contributed to improved model performance through precise, consistent annotations.