AI Tool Evaluator & Tester
In this role, I evaluated and tested various AI tools for quality and consistency of their text-based outputs. My responsibilities included providing structured feedback on the tone, accuracy, and potential bias of responses generated by tools such as ChatGPT, Claude, and Kora AI. I developed expertise in critically assessing AI outputs for errors and inconsistencies—skills crucial for AI training and evaluation tasks.

• Conducted hands-on evaluations of multiple conversational AI platforms
• Delivered detailed assessments highlighting strengths and issues in AI performance
• Used platforms such as Outlier.ai and Mindrift for structured AI evaluation
• Supported improvement of AI models through comprehensive feedback