AI/ML Evaluator — Content & Output Review | Revelo AI Training Platform
As an AI Output Evaluator for the Revelo AI Training Platform, I reviewed and rated AI-generated problem statements and solutions, ensuring outputs were correct, clear, and logically consistent through structured annotation workflows. I provided detailed feedback and contributed to formalized evaluation criteria aimed at improving model performance.
• Evaluated and labeled LLM outputs for correctness, clarity, and bias.
• Provided structured feedback to improve data quality for RLHF pipelines.
• Helped develop guideline-driven annotation standards.
• Maintained reproducible documentation to support continuous improvement.