DataCompute
The project involved evaluating AI-generated outputs against their prompts, with criteria covering output quality and prompt adherence.
With over three years of specialized experience in multilingual AI training and evaluation, I've contributed to improving language models through extensive work on platforms like DataAnnotation and Prolific. My expertise spans evaluating LLMs in both Arabic (native) and English (C1), with particular focus on translation quality assessment, cross-cultural content adaptation, and prompt engineering across diverse domains.

I've systematically evaluated AI-generated translations between Arabic and English, identifying linguistic nuances and cultural context issues that automated systems often miss. My background in linguistics and education has proven invaluable in developing effective prompting strategies that enhance model performance on specialized topics, from academic content to technical documentation.

What distinguishes my approach is the combination of rigorous analytical methodology with deep cultural and linguistic understanding, allowing me to provide nuanced feedback that addresses both technical accuracy and cultural appropriateness in AI-generated content.
The project involved reviewing AI-generated translations and providing corrections or enhancements where necessary and appropriate.
Bachelor of Arts, English Language and Literature
Master of Education, Teaching English to Speakers of Other Languages (TESOL)
Content Evaluator & Multilingual Text Specialist
Text Transformation & Content Accessibility Specialist