AI Training & Evaluation Annotator—Project Diamond (Handshake AI)
Contributed to AI and LLM data annotation and evaluation tasks as part of Project Diamond for Handshake AI. Ensured AI-generated responses were accurate, high-quality, and adherent to established guidelines, maintaining consistency and improving model outputs through detailed human feedback.
• Annotated AI responses for accuracy and relevance
• Evaluated outputs for guideline compliance
• Provided structured feedback on model performance
• Optimized training data quality for downstream tasks