Software Developer & AI Evaluation Assistant
Reviewed and analyzed AI-generated code for correctness, performance, and readability in freelance and academic project settings. Provided structured feedback and identified best-practice deviations to support AI model enhancement. Designed structured prompts and scenarios to evaluate the coding, reasoning, and debugging abilities of AI systems.
• Evaluated AI code solutions for logic, efficiency, and edge-case handling.
• Developed prompt strategies to test model behavior and solution accuracy.
• Delivered actionable feedback to address identified issues in AI-generated code.
• Applied quality assessment criteria to ensure high-standard model output.