Data Annotator
As a Data Annotator at HUGO Technologies, I performed large-scale evaluation and labeling of AI-generated text and image data for machine learning projects. I applied rubric-based scoring to assess prompt–response quality, identified bias, flagged hallucinations, and reported structured feedback. My contributions included quality assurance, improving dataset reliability, and collaborating with QA teams to refine annotation guidelines.
• Labeled and annotated over 10,000 data points across LLM, GCE, PPC, and MMLLM projects.
• Used annotation tools including SRT and Parimango while meeting or exceeding 98% quality accuracy.
• Conducted evaluations to detect and flag ambiguous or low-quality data, reducing model bias.
• Supported refinement of annotation guidelines, improving project efficiency.