AI Response Evaluator and Data Annotator
I worked on remote freelance AI model training projects with Remotasks, Outlier, and OneForma, evaluating and annotating AI responses to prompts across several projects, including Lighthouse and Bulba. I provided structured feedback and quality analysis to improve model output consistency and accuracy.
• Assessed and rated AI-generated text for instruction following, factual accuracy, completeness, tone, and safety.
• Wrote concise evaluation comments describing response quality and suggesting improvements.
• Performed structured annotation and prompt-response evaluation on English and Arabic content.
• Applied annotation and rating guidelines to ensure high-quality training data output.