Multilingual Text Evaluation and Summarization Project
Participated in a large-scale text evaluation project focused on Arabic and English content. Tasks included summarizing long texts, evaluating AI-generated responses, and classifying textual data by quality and relevance. Performed content moderation to ensure guideline compliance and improve AI safety, and contributed to improving multilingual large language models (LLMs) through detailed feedback and accurate labels. The project emphasized precise, high-quality annotations with strict adherence to quality standards.