LLM Code Evaluation (RLHF)
Performed technical evaluation of 1,000+ AI-generated code snippets (Next.js/TypeScript), rating responses for syntax accuracy, security vulnerabilities (e.g., missing Row-Level Security policies), and logic errors. Rewrote hallucinated code to produce high-quality training datasets for a RAG-based system, adhering to strict ISO-standard quality measures for code safety and efficiency.
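A minimal sketch of how such a rubric-based rating could be expressed in TypeScript. The criterion names, weights, `Rating` type, and `overallScore` function below are illustrative assumptions for this write-up, not the project's actual tooling:

```typescript
// Hypothetical evaluation rubric mirroring the three criteria above.
type Criterion = "syntax" | "security" | "logic";

interface Rating {
  criterion: Criterion;
  score: number; // 0 (fail) to 5 (ideal)
  note: string;
}

// Illustrative weights; security issues (e.g. a table exposed
// without a Row-Level Security policy) weigh heaviest.
const WEIGHTS: Record<Criterion, number> = {
  syntax: 0.3,
  security: 0.4,
  logic: 0.3,
};

// Weighted average of the per-criterion scores.
function overallScore(ratings: Rating[]): number {
  const total = ratings.reduce(
    (sum, r) => sum + r.score * WEIGHTS[r.criterion],
    0,
  );
  const weightSum = ratings.reduce((s, r) => s + WEIGHTS[r.criterion], 0);
  return weightSum > 0 ? total / weightSum : 0;
}

// Example: a snippet that compiles but has a security gap.
const example: Rating[] = [
  { criterion: "syntax", score: 5, note: "compiles cleanly" },
  { criterion: "security", score: 1, note: "no RLS policy on queried table" },
  { criterion: "logic", score: 4, note: "off-by-one in pagination" },
];

console.log(overallScore(example).toFixed(2)); // → 3.10
```

A low security score dominates the aggregate here, which reflects the emphasis the evaluation placed on vulnerabilities over surface-level correctness.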