AI Data Annotation & Content Evaluation Tasks
Performed evaluation and text analysis tasks to support AI training initiatives. Analyzed written content to assess language, tone, and user intent according to detailed guidelines. Provided structured feedback to improve AI system performance and quality.
• Evaluated prompt-response content for relevance and clarity
• Rated user-generated text samples for context and appropriateness
• Scored AI outputs using established rating systems
• Maintained strict quality and consistency standards across all evaluations