AI Writing Evaluator
Evaluated and refined Large Language Model (LLM) outputs for accuracy, safety, and helpfulness. Performed in-depth linguistic analysis and fact-checking of AI-generated text to uphold quality standards. Delivered structured feedback via markdown and proprietary annotation tools to strengthen model instruction-following.
• Identified and corrected logical inconsistencies in AI responses.
• Applied prompt engineering and Reinforcement Learning from Human Feedback (RLHF) methods.
• Collaborated with a global network of developers on continuous model improvement.
• Produced high-volume, high-quality training data for LLM enhancement.