AI Data Annotation & Content Evaluation
This position involved evaluating and labeling AI-generated text data for quality and accuracy using structured scoring systems. Tasks centered on identifying inconsistencies, rating outputs, and improving dataset reliability for AI model training. Annotation was performed in spreadsheet tools such as Excel and Google Sheets.

• Reviewed and labeled AI-generated prompts and responses for training datasets.
• Applied standardized evaluation guidelines to ensure consistent annotation.
• Identified factual errors and ambiguous content in text outputs.
• Used structured rating systems to assess and improve overall data quality.