LLM Annotator — Anuttacon
As an LLM Annotator, I designed and wrote high-quality prompts to train and evaluate large language models (LLMs) across diverse domains. I evaluated AI-generated responses on accuracy, clarity, and alignment with human intent, providing detailed feedback to support iterative improvements. I also ranked and rated model outputs as part of reinforcement learning from human feedback (RLHF) workflows.

• Produced comprehensive prompts and response evaluations for next-generation LLMs
• Identified edge cases and failure modes in model-generated outputs
• Contributed to ongoing guideline refinement and model safety efforts
• Supported RLHF by ranking and providing qualitative feedback on outputs