Data Evaluation and Rating on the Outlier and Clickworker Platforms
In this project, I was responsible for data evaluation and rating on the Outlier and Clickworker platforms, focusing on text-based tasks in Egyptian Arabic. The scope of work included assessing AI-generated outputs; rating their accuracy, coherence, and cultural relevance; and performing text summarization, generation, and classification tasks. I also contributed to evaluation projects that required fine-grained judgment of grammar, fluency, and overall quality.

The project was large-scale, involving hundreds of tasks per week, which demanded consistency, attention to detail, and adherence to strict guidelines. To ensure quality, I followed platform-specific instructions, double-checked outputs for linguistic accuracy, and applied standardized rating metrics. This work helped enhance the training datasets and improve model performance across multiple use cases. Through this experience, I strengthened my expertise in evaluation, rating, and text-based tasks.