AI Data Annotator / QA Evaluator
Served as an AI Data Annotator and QA Evaluator at Outlier AI, focusing on text-based annotation and evaluation for AI/LLM models. Tasks included response rating, summarization evaluation, and intent labeling against defined guidelines. Used structured formats such as JSON and YAML to define test cases and validation rules. Collaborated to improve model performance by providing feedback on hallucinations and inconsistencies.
• Annotated and reviewed AI-generated outputs and agent logs.
• Created evaluation scenarios and documented edge cases.
• Designed and applied scoring rubrics.
• Flagged issues related to logical inconsistencies and formatting.
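A minimal sketch of what such a YAML-defined test case might look like; the field names (`id`, `prompt`, `rubric`, `validation`) and score scale are illustrative assumptions, not a format specified by the role:

```yaml
# Hypothetical evaluation test case (illustrative field names and scale)
id: tc-summarization-001
task: summarization_evaluation
prompt: "Summarize the attached support ticket in two sentences."
rubric:
  accuracy:      # penalize hallucinated facts not present in the source
    weight: 0.5
  completeness:  # key points from the source must be covered
    weight: 0.3
  formatting:    # length limit and sentence count respected
    weight: 0.2
validation:
  max_sentences: 2
  score_scale: [1, 5]   # 1 = fails rubric, 5 = fully satisfies it
```

A rubric expressed this way lets multiple annotators apply the same weighted criteria consistently and makes edge cases easy to document alongside the rule they exercise.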