Remote Contributor - Data Annotation & AI Response Evaluation
As a remote contributor at Crowdgen, I completed language-focused data annotation and labeling tasks. My responsibilities included evaluating written content for grammar, clarity, tone, and relevance in alignment with project guidelines, as well as reviewing AI-generated responses to improve their quality and usefulness.
• Labeled and categorized written English text data according to project guidelines.
• Evaluated and rated AI-generated textual responses for quality and accuracy.
• Ensured consistency and attention to detail across language annotation projects.
• Managed deadlines and maintained productivity in a remote work environment.