Data Annotation Worker - Appen Crowdsource and UHRS
Worked on data annotation projects using the Appen Crowdsource and UHRS platforms, labeling and evaluating textual data for AI training and improvement tasks, including reviewing and rating responses for natural language processing models.
• Followed established guidelines to ensure data quality and consistency.
• Performed relevance and accuracy checks on varied text inputs.
• Contributed to dataset labeling for search and conversational AI applications.
• Applied rating scales and categorization schemes to support machine learning refinement.