LLM Response Evaluation & RLHF Annotation Project
Worked on large-scale AI training and evaluation projects focused on improving Large Language Model (LLM) performance, reviewing and rating AI-generated responses against structured evaluation rubrics.