Freelance AI Data Annotator & Evaluator
As a freelance AI data annotator and evaluator, I delivered high-accuracy labeling and ranking for large-scale datasets supporting supervised learning pipelines. I evaluated AI-generated responses for accuracy, relevance, safety, and instruction-following against detailed rubrics, ensured consistent output quality, and contributed to improving annotation standards through structured feedback and issue flagging.
• Annotated and labeled text, audio, and image data to support multiple AI development workflows.
• Ranked and rated AI responses using rubrics, documenting the rationale for each judgment.
• Maintained high throughput while adhering to strict SLAs and quotas on remote, global teams.
• Worked across platforms including Remotasks, Appen, Toloka, Outlier, and uTest for annotation and evaluation tasks.