Freelancer Overview
I am an experienced AI Evaluation and Data Labeling Specialist with over five years of hands-on work in rubric-based review, LLM evaluation, and multimodal annotation. My expertise covers prompt and response analysis, RLHF, SFT, classification, and adversarial red-teaming. I specialize in detecting factual inaccuracies, bias, ambiguity, and policy non-compliance, and I deliver consistent, high-quality outputs that stay aligned with evolving guidelines. I have worked across multiple vendor platforms, including Scale AI, Surge AI, DataAnnotation.Tech, Outlier, Appen, TELUS International, and TaskUs, consistently maintaining accuracy rates above 98%.
With a Master’s in Language Technology and a Bachelor’s in English and Linguistics from the University of Alabama, I bring both academic depth and applied experience in annotation frameworks, computational linguistics, and evaluation design. My background in teaching, copyediting, and auditing strengthens my ability to apply detailed rubrics, give actionable feedback, and ensure cross-team consistency. I thrive in high-volume, deadline-driven environments and am passionate about improving model fairness, safety, and overall performance through careful, detail-oriented evaluation.