Prompt–Rubric Evaluator (Contract Project)
Evaluated AI model responses for compliance with prompt–rubric criteria and for accuracy. Ensured quality by comparing model outputs against the defined rubric and documenting findings; the work demanded close attention to detail and a strong analytical mindset.
• Reviewed diverse AI-generated text responses across multiple scenarios.
• Rated compliance against predefined rubric standards.
• Documented inconsistencies and suggested concrete improvements.
• Demonstrated adaptability across dynamic evaluation tasks.