AI Response Annotator & Evaluator — Project Diamond (Handshake AI)
Annotated and evaluated AI-generated responses to improve model accuracy and reliability, maintaining high consistency and close adherence to detailed project guidelines. Contributed iterative feedback that informed ongoing improvements to system performance.
• Annotated AI-generated text outputs against defined quality criteria.
• Assessed candidate model replies for relevance, correctness, and formatting.
• Followed structured policies for annotation tasks and team review.
• Provided written feedback and scores to inform retraining and evaluation cycles.