QA Engineer (LLM Evaluation/Adversarial Prompting)
I utilized adversarial prompting techniques in freelance QA roles to evaluate LLM (Large Language Models) on bias, safety, and robustness. These activities focused on large-scale analysis of LLM outputs for global AI-driven projects. My tasks contributed critical feedback to improve AI language model performance and alignment. • Designed adversarial prompts for AI model assessment • Systematically rated LLM responses for compliance and fairness • Reported on bias and edge case handling to stakeholders • Collaborated closely with international AI development teams