AI data annotation and model evaluation
The project focused on supporting AI model development through accurate data labelling and response evaluation. Specific tasks included annotating and categorizing text datasets, reviewing AI-generated responses, and identifying incorrect or biased outputs. The work spanned multiple annotation tasks involving hundreds of data samples, with all labelling and evaluation assignments completed. Quality measures included following strict annotation guidelines, maintaining high attention to detail, and ensuring that annotations aligned with the quality standards required for AI model training.
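A quality-control workflow like the one described can be sketched as a simple guideline check over annotation records. This is a minimal illustrative sketch, not the project's actual tooling: the label set, record fields, and rule names are hypothetical assumptions.

```python
# Minimal sketch of an annotation quality check. The label categories,
# record fields, and rules below are hypothetical, for illustration only.
ALLOWED_LABELS = {"correct", "incorrect", "biased", "off_topic"}

def validate_record(record):
    """Return a list of guideline violations for one annotation record."""
    issues = []
    if record.get("label") not in ALLOWED_LABELS:
        issues.append("unknown label: %r" % record.get("label"))
    if not record.get("text", "").strip():
        issues.append("empty text field")
    # Flagged outputs should carry a short rationale for the reviewer.
    if record.get("label") in {"incorrect", "biased"} and not record.get("rationale"):
        issues.append("missing rationale for flagged output")
    return issues

def audit(records):
    """Map record id -> violations, keeping only records with problems."""
    return {r["id"]: validate_record(r) for r in records if validate_record(r)}

samples = [
    {"id": 1, "text": "The capital of France is Paris.", "label": "correct"},
    {"id": 2, "text": "All engineers are men.", "label": "biased",
     "rationale": "gender stereotype"},
    {"id": 3, "text": "", "label": "wrong_tag"},
]
print(audit(samples))  # only record 3 has violations
```

Running such a check over each labelling batch is one concrete way to enforce that annotations stay aligned with the guidelines before the data is used for model training.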