AI-Generated Code Quality Review
Reviewed AI-generated Python and SQL code snippets for correctness, clarity, and structure. Evaluated code quality, identified errors, and provided detailed feedback to improve code generation models.
I am an experienced AI data labeler and RLHF rater with over two years of hands-on work training large language models through high-quality data annotation, response ranking, and safety evaluation. My expertise spans advanced NLP annotation; multimodal labeling of text, images, and video; and hallucination detection with detailed semantic feedback to optimize model performance. I am skilled in evaluating Python and SQL code for correctness and clarity, maintaining strict quality standards across thousands of labeled examples, and documenting every decision for transparency and consistency. My technical toolkit includes RLHF annotation platforms, n8n, Apify, and Google Workspace, and I thrive in fast-paced environments that demand precision, reliability, and a deep understanding of AI data workflows.
Evaluated and ranked AI-generated responses for quality, factual accuracy, safety, and policy compliance. Performed preference ranking across thousands of prompt-response pairs to improve LLM performance. Identified hallucinations, harmful content, and style inconsistencies.
Labeled and categorized images and video content for computer vision models. Created detailed image descriptions, tagged objects, and performed quality checks on visual datasets. Ensured accuracy and consistency across large-scale multimodal annotation tasks.
Specialization, Software Engineering
Bachelor of Science, Mathematics & Chemistry
Founder & Automation Specialist
Data Verification Specialist