Text Classification & AI Output Evaluation (Practice / Project-Based)
Completed entry-level, project-based text labeling and AI output evaluation tasks to build familiarity with data annotation workflows. Responsibilities included classifying text responses; evaluating AI-generated outputs for relevance, accuracy, and clarity; and flagging low-quality or incorrect responses against predefined criteria. The work emphasized consistent judgment, guideline adherence, and quality-focused review rather than model development.