Data Annotator
As a Data Annotator at Handshake AI, I tested prompts and rubrics to evaluate and improve AI models. My work involved interacting directly with text data to provide feedback and ensure model accuracy, requiring attention to detail and analytical thinking to effectively challenge and enhance AI performance.
• Conducted rigorous prompt and rubric testing to assess model capabilities.
• Provided feedback for model improvement based on annotation outcomes.
• Worked primarily with text data related to AI training and evaluation.
• Used internal, proprietary AI annotation tools to complete labeling tasks.