Independent AI Data Labeler & Evaluator
Independently performed hands-on AI data labeling tasks focused on text classification, content annotation, and quality scoring. Evaluated AI-generated responses for accuracy, clarity, and reasoning quality against structured guidelines; regularly compared multiple AI outputs, selected the most logical responses, and refined prompts to improve model performance.
• Completed self-directed projects simulating tasks from Outlier, Scale AI, and DataAnnotation
• Practiced prompt writing, prompt refinement, and error detection
• Used ChatGPT, Copilot, and Claude in daily annotation work
• Developed familiarity with structured judgment and evaluation guidelines