AI Text Annotator & Response Evaluator
As an AI Text Annotator & Response Evaluator for Alignerr by Labelbox, I evaluated AI-generated text responses for factual accuracy, tone, and safety across multiple projects. I consistently adhered to detailed annotation rubrics while providing calibration feedback and written rationales for complex cases. This work required adapting to evolving guidelines and handling thousands of samples weekly in a remote environment.
• Reviewed and rated AI-generated text for helpfulness and compliance with safety standards.
• Applied structured rubrics to classify and annotate diverse written content.
• Collaborated with remote teams to resolve annotation disagreements and improve consistency.
• Helped refine annotation guidelines by contributing constructive feedback and case notes.