
Hasan Sakr

AI Reviewer

Mansoura, Egypt
$20.00/hr · Expert
Scale AI · Mercor · Data Annotation Tech

Key Skills

Software

Scale AI
Mercor
Data Annotation Tech

Top Subject Matter

No subject matter listed

Top Data Types

Computer Code Programming
Document
Text
Video

Top Label Types

Evaluation Rating
Fine Tuning

Freelancer Overview

I am an experienced AI evaluator and data annotation specialist with a strong background in reviewing and enhancing AI-generated solutions in mathematics and Python programming. My work has involved applying structured rubrics to assess accuracy, reasoning depth, and conceptual understanding, as well as authoring optimized solutions and comprehensive feedback to drive iterative AI model improvement. I have collaborated with remote teams to evaluate hundreds of advanced cases monthly, introduced workflow documentation for consistent KPI tracking, and developed detailed reports to identify recurring issues and improve model reliability. My technical expertise is complemented by a foundation in physics and hands-on experience with data-driven projects, making me adept at ensuring high-quality training data for AI systems across complex domains.

English (Expert)

Labeling Experience

Scale AI

Math / Python & AI Code Evaluator

Scale AI · Computer Code Programming · Evaluation Rating
Evaluated AI-generated mathematics solutions and Python code for correctness, clarity, and computational efficiency. Created reference solutions and feedback templates to improve model performance and acceptance rates. Reported regularly to training and prompt-engineering teams on error classes and improvements.
• Designed workflow documentation and KPI dashboards for scalable assessment.
• Generated annotated test cases and example solutions for engineering reproducibility.
• Delivered consistent, rubric-driven assessments of advanced cases each month.
• Collaborated with cross-functional teams to refine evaluation approaches.


2025
Mercor

Python & AI Code Evaluator

Mercor · Computer Code Programming · Evaluation Rating
Reviewed AI-generated Python code for correctness, performance, and clarity. Produced executive-friendly assessment reports detailing issues and tests for iterative model improvement. Established feedback loops with data and modeling teams to accelerate AI enhancements.
• Focused evaluations on edge cases, numeric instability, and algorithmic robustness.
• Generated structured rubrics and prioritized assessment recommendations.
• Identified recurring model errors and ambiguities for development triage.
• Leveraged AI evaluation software and remote workflow tools for reporting.


2024
Data Annotation Tech

Mathematics AI Evaluator

Data Annotation Tech · Text · Evaluation Rating
Evaluated AI-generated solutions to advanced mathematics problems using rubric-based scoring. Diagnosed recurring reasoning faults and suggested rubric improvements for better model alignment. Ensured inter-rater reliability and consistent scoring among remote evaluators.
• Emphasized conceptual understanding and rigorous step-by-step checking.
• Proposed rubric and instruction updates for clearer AI model guidance.
• Calibrated reviewer scoring for reliability across remote teams.
• Produced detailed evaluation reports for iterative training of math AI.


2024 - 2025

Education


Zewail City Of Science And Technology

Bachelor of Science, Physics

2018 - 2022

Work History


Adslux

Senior Marketer

Romania
2022 - 2024

Upwork

Freelance Marketing Manager

Cairo
2018 - 2024