AI Data Annotator / Evaluator
As an AI Data Annotator and Evaluator on platforms such as Luel and Outlier, I performed a range of data labeling and AI evaluation tasks. My work included evaluating text prompts, grading responses, and labeling AI-generated content across multiple modalities. I also provided structured feedback to improve the quality of AI training data and model performance.

• Labeled text and voice outputs generated by AI models for correctness and relevance.
• Conducted prompt evaluation and response grading with strict adherence to guidelines.
• Tested speech recognition accuracy and usability of AI-driven voice systems.
• Produced detailed reports and feedback to enhance system quality control.