AI Generalist & Prompt Evaluator (Freelance/Platform-Based)
As an AI Generalist and Prompt Evaluator, I systematically assess whether AI outputs are accurate, logical, safe, and well-structured. My work involves producing high-quality written evaluations that follow detailed instructions and rubrics for AI training and output refinement. Leveraging strong critical thinking and attention to detail, I provide clear, structured feedback on AI-generated responses and participate in RLHF-style feedback tasks on dedicated platforms.
• Evaluated AI-generated responses against detailed criteria and rubrics for quality, accuracy, and safety.
• Produced rankings and written assessments of prompt-response pairs to inform RLHF and SFT development.
• Followed technical documentation and task instructions to ensure consistent, objective scoring.
• Delivered structured written feedback to support prompt engineering and output calibration.