Data Annotation
This task involves evaluating AI prompts and their generated responses to ensure they meet defined quality and safety standards. Reviewers carefully read each prompt, analyze the AI's response, and assess whether the response is relevant, accurate, clear, complete, and aligned with project guidelines. They check for issues such as hallucinations, logical errors, bias, harmful or unsafe content, and policy violations, then assign ratings based on structured scoring criteria and provide concise justifications for their evaluations. The overall goal is to improve AI model performance by ensuring outputs are helpful, coherent, factually correct, and compliant with annotation guidelines.
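The structured-scoring workflow above can be sketched as a small data model. This is a minimal illustration, not any project's actual rubric: the criteria names, the 1-to-5 scale, the issue labels, and the pass threshold are all hypothetical assumptions chosen for the example.

```python
from dataclasses import dataclass, field

# Hypothetical rubric criteria; real projects define their own.
CRITERIA = ("relevance", "accuracy", "clarity", "completeness")

@dataclass
class Evaluation:
    """One reviewer's structured rating of a single prompt/response pair."""
    scores: dict                                 # criterion -> rating, e.g. 1 (poor) to 5 (excellent)
    issues: list = field(default_factory=list)   # flagged problems, e.g. "hallucination", "bias"
    justification: str = ""                      # concise rationale for the ratings

    def is_compliant(self, threshold: int = 3) -> bool:
        # A response passes only if every criterion meets the threshold
        # and no safety or policy issue was flagged.
        return (all(self.scores.get(c, 0) >= threshold for c in CRITERIA)
                and not self.issues)

good = Evaluation(
    scores={"relevance": 5, "accuracy": 4, "clarity": 4, "completeness": 4},
    justification="Directly answers the question; no factual errors found.",
)
bad = Evaluation(
    scores={"relevance": 4, "accuracy": 2, "clarity": 4, "completeness": 3},
    issues=["hallucination"],
    justification="Cites a nonexistent study.",
)
print(good.is_compliant())  # True
print(bad.is_compliant())   # False
```

Keeping scores, issue flags, and the written justification in one record mirrors how reviewers submit evaluations: the numeric ratings support aggregate quality metrics, while the justification gives model developers actionable context.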