Multimodal Data Annotation & Quality Evaluation, Invisible Technologies
Experienced in large-scale multimodal data annotation projects supporting the training and evaluation of Large Language Models (LLMs) and other AI systems. Performed text classification, prompt-response quality evaluation, sentiment labeling, entity tagging, image annotation, and multimodal alignment tasks in accordance with detailed client guidelines. Annotated and reviewed thousands of data samples with high accuracy and consistency. Applied quality-control procedures, including peer review, gold-standard checks, and rubric-based scoring, to ensure dataset reliability. Collaborated with QA teams to resolve edge cases and refine labeling guidelines. Consistently met productivity and accuracy benchmarks in fast-paced production environments.