AI Prompt Evaluator / Data Annotation Specialist
As an AI prompt evaluator, I reviewed outputs from large language models to assess their accuracy, safety, relevance, and clarity. My work included checking responses for guideline compliance, annotating errors, and providing structured feedback on AI-generated content. I maintained high quality standards while working independently on remote projects.
• Conducted systematic evaluation and grading of AI-generated prompts and responses.
• Labelled and annotated text data for accuracy, safety risks, and overall quality.
• Rated and assessed outputs against standardized criteria.
• Provided structured feedback to improve AI model performance.