AI Text Evaluation
Completed AI training and text-evaluation tasks focused on reviewing written responses against project instructions and quality guidelines. This work included checking whether responses followed the prompt, identifying unsupported claims, evaluating factual accuracy, rewriting responses in a more natural human tone, and explaining why an output passed or failed. I also completed rubric-based evaluation tasks, creating clear assessment criteria, defining quality expectations, and judging model outputs against structured guidelines. This experience strengthened my skills in instruction-following, critical reading, error detection, response comparison, content quality review, and clear written feedback for AI training workflows.