AI Text Annotation & Evaluation (LLM Tasks)
Performed text-based annotation and evaluation tasks supporting large language model improvement. Reviewed AI-generated responses for accuracy, clarity, and instruction adherence; classified text outputs against defined criteria; and provided structured feedback identifying errors, inconsistencies, and areas for refinement. Followed detailed task guidelines and applied consistent quality standards to ensure reliable annotations across repeated task iterations.