AI Response Evaluation & RLHF Annotation Project
Conducted structured evaluations of AI-generated text outputs using rubric-based grading frameworks. Assessed responses for logical consistency, factual accuracy, instruction adherence, bias, and hallucination risk.