AI Workflow & Evaluation Contributor
Reviewed and evaluated AI-generated conversational flows and structured outputs for clarity, logic, and usability. Applied detailed rubrics and guidelines to test AI-assisted workflows and to flag inaccuracies, inconsistencies, and mismatches with user intent. Monitored and refined AI outputs by comparing responses and providing targeted feedback for improvement.
• Evaluated outputs for accuracy, clarity, and relevance.
• Applied structured guidelines for rubric-based scoring.
• Tested prompts, compared model responses, and iterated on refinements.
• Used LLM interfaces and digital workflow tools for remote evaluation.