AI Response Evaluator & Prompt Engineer (Freelance/Personal Projects)
Worked with AI tools such as GPT, Claude, and Blackbox AI to evaluate and compare model responses. Performed prompt engineering tasks to assess LLM-generated content for quality and relevance, and built foundational knowledge of data annotation processes in support of AI projects.
• Assisted in quality evaluation and prompt-based testing of AI model responses.
• Compared outputs from different AI models to identify the strongest responses.
• Applied close attention to detail in content review and response assessment.
• Supported content evaluation work with analytical thinking and strong English writing skills.