Quality Assurance Evaluator
I reviewed and evaluated AI-generated text, responses, and media outputs against strict quality rubrics and guidelines. I checked large language model outputs for factual accuracy, coherence, bias, formatting, and safety issues, and documented all concerns in detail. My structured, actionable feedback to technical teams helped improve the performance and reliability of AI systems.
• Maintained evaluation accuracy above 90% during regular audits.
• Used platform tools to categorize and submit issues.
• Identified and reported hallucinations, bias, and safety risks.
• Categorized feedback by severity to facilitate resolution.