Independent Generative AI Evaluator (Self-Directed Projects)
Through ongoing independent projects and self-directed learning, I engaged intensively with state-of-the-art AI systems, systematically evaluating model performance from a user-experience and safety perspective using analytical criteria informed by my scientific background. My reviews focused on detecting potential harm, accuracy issues, and gaps in user satisfaction.
• Interacted regularly with generative AI products and provided feedback on output quality.
• Analytically assessed AI outputs for potential risks and errors.
• Documented user perception and satisfaction across varied AI contexts.
• Maintained ongoing records of findings to support continuous model improvement.