Text Annotation, Data Labeling, and AI Output Evaluation Tasks (Academic & Volunteer Contexts)
In academic and volunteer capacities, I performed text annotation, data labeling, and AI output quality evaluation tasks relevant to the training and assessment of AI language models. This work emphasized structured feedback, strict adherence to guidelines, and careful review of written communications for accuracy. My responsibilities included identifying errors, assessing clarity, and providing constructive, systematic critiques of model outputs.
• Applied annotation and evaluation methods to text-based datasets for research and model review
• Used Google Workspace (Docs, Sheets, Forms) for structured data entry and collaborative workflows
• Summarized and synthesized complex content for AI and academic use
• Prioritized consistency, attention to detail, and adherence to complex rubrics