Generative AI Annotator Analyst
Ongoing GenAI data labeling and evaluation project focused on improving large language model safety and response quality for a major social media and AI ecosystem. I review user prompts and AI-generated outputs, labeling them against detailed content-risk and quality standards and documenting clear rationales to support consistency and downstream training. Tasks include safety classification, response quality rating, policy violation identification, and complex edge cases requiring close attention to nuance, context, and multilingual considerations. Quality controls include strict guideline adherence, calibration alignment, consistency checks, and feedback loops that reduce ambiguity and improve labeling reliability. I also support process improvement by flagging recurring error patterns, clarifying ambiguous guideline areas, and mentoring teammates on difficult cases to raise overall annotation accuracy and standardization.