LLM Annotator
As an LLM Annotator, I labeled diverse datasets for large language model training and evaluation. I assessed AI-generated content for coherence, accuracy, safety, and ethical alignment against strict project guidelines, provided structured feedback to support Reinforcement Learning from Human Feedback (RLHF), and maintained rigorous annotation standards.

• Evaluated model outputs for summarization, question answering, and code generation tasks.
• Collaborated to resolve ambiguous cases and contributed to guideline updates.
• Documented annotation procedures to support inter-annotator agreement.
• Upheld quality benchmarks and project service-level agreements.
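Inter-annotator agreement is typically quantified with a chance-corrected statistic such as Cohen's kappa. The sketch below (hypothetical function and sample labels, not from any specific project) shows the standard formula, kappa = (p_o − p_e) / (1 − p_e), for two annotators labeling the same items:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators over the same set of items."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label distribution.
    count_a, count_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(count_a[c] * count_b[c] for c in count_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Illustrative safety labels from two hypothetical annotators.
a = ["safe", "safe", "unsafe", "safe"]
b = ["safe", "unsafe", "unsafe", "safe"]
print(cohens_kappa(a, b))  # → 0.5
```

A kappa near 1 indicates strong agreement beyond chance; values near 0 suggest the guidelines need clarification.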