LLM Evaluation and Text Annotation Project
Contributed to a multilingual LLM evaluation and text annotation project aimed at improving model comprehension and response quality in English and Swahili. Tasks included named entity recognition, content classification, text summarization, and rating AI-generated outputs for coherence, factual accuracy, and tone. Ensured consistent labeling quality across large datasets through detailed review and adherence to strict annotation guidelines. Collaborated with globally distributed teams to refine evaluation protocols and support fine-tuning of large language models.