AI Model Trainer & Evaluator
As an AI Model Trainer & Evaluator at Alignerr, I was responsible for designing and testing prompts to assess AI model responses. My tasks focused on evaluating the reasoning and factual accuracy of generative AI outputs. I worked collaboratively in a fast-paced, remote startup environment to support improvements in model capabilities.
• Evaluated and rated generative text outputs from language models.
• Crafted, curated, and tested prompt-response sets targeting reasoning and factual accuracy.
• Identified errors and weaknesses in AI outputs to inform iterative model improvement.
• Contributed to user experience improvements through AI content review and quality feedback.