DataAnnotation.tech – AI Training & Evaluation
Worked on text annotation and AI response evaluation projects for large language model training. Tasks included writing and evaluating prompts, generating and reviewing model responses, classifying user intent, and applying detailed rubrics to assess quality, correctness, and safety. Regularly performed preference ranking among multiple model outputs and reviewed other annotators’ work for guideline adherence. Handled sensitive and harmful content categories, including offensive language and safety-critical prompts, ensuring outputs met neutrality and policy requirements. Maintained consistency across evolving task specifications and followed strict quality control standards.