AI Output Evaluation & Text Annotation
Contributed to AI training and evaluation workflows: reviewed model responses against detailed guidelines and provided structured feedback to improve output quality. Assessed accuracy, clarity, and instruction adherence across text-based prompts, identified edge cases, and maintained consistency across assignments. The work required close attention to detail, clear reasoning, and adaptation to evolving labeling standards.