
Imrane T.

AI Evaluator | LLM Reasoning Analyst | Prompt & Security Testing

Belgium
$35.00/hr
Intermediate
Label Studio, Scale AI, Snorkel AI

Key Skills

Software

Label Studio
Scale AI
Snorkel AI
Internal/Proprietary Tooling
CVAT

Top Subject Matter

LLM Reasoning
AI Output Validation
Adversarial Testing

Top Data Types

Text
Computer Code Programming
Document

Top Task Types

Evaluation / Rating
Question Answering
Text Generation
RLHF
Prompt & Response Writing (SFT)
Red Teaming
Computer Programming / Coding

Freelancer Overview

AI Evaluator / LLM Reasoning Analyst with hands-on experience analyzing AI model outputs and detecting hallucinations, logical inconsistencies, and edge-case failures. Strong background in prompt engineering, structured evaluation, and response-quality validation. Experience includes designing evaluation workflows, testing model reasoning capabilities, and identifying vulnerabilities in outputs. Also skilled in Python automation, web security testing (XSS, SQL injection), and structured data analysis, bringing an analytical, adversarial mindset to AI training tasks.

Intermediate
French, English

Labeling Experience

AI Evaluator / LLM Reasoning Analyst

Text
Responsible for evaluating the reasoning, consistency, and robustness of large language models (LLMs). Performed hallucination detection, logical-flaw identification, and edge-case analysis on AI outputs. Developed structured prompts and workflows for validating model outputs and supporting adversarial testing.
• Evaluated LLM outputs using structured methodologies.
• Identified hallucinations, logical inconsistencies, and edge-case failures.
• Designed and implemented prompts to test model reasoning and reliability.
• Automated analysis workflows using Python and JSON-based tools.

Present

Education

Independent Learning

Self-directed Technical Training, Artificial Intelligence & Cybersecurity

2024 - 2026

Work History

N/A

Technical Analyst & Automation Specialist

N/A
2024 - Present