Hüsnü Özaltun

Experienced AI Trainer

Artvin, Turkey
$15.00/hr · Intermediate
Appen · Labelbox · OneForma

Key Skills

Software

Appen
Labelbox
OneForma
Scale AI

Top Subject Matter

No subject matter listed

Top Data Types

Audio
Image
Text

Top Label Types

Fine-Tuning
Prompt/Response Writing (SFT)
RLHF
Text Generation
Translation/Localization

Freelancer Overview

I am an experienced AI Trainer and Subject Matter Expert with a strong background in data labeling, prompt design, and AI model evaluation. I have hands-on experience training and validating large language models through real-world task creation, response review, and quality assurance, with a particular focus on natural language processing and multilingual data. My work combines pedagogical expertise with technical insight, allowing me to produce high-quality, well-structured training data that improves model accuracy and reliability. I have contributed to AI projects involving Turkish and English language content, ethical AI practices, and model optimization. I am comfortable working independently in remote, asynchronous environments while meeting strict quality and deadline requirements.

Intermediate: Arabic, English

Labeling Experience

OneForma

Annotator

OneForma · Text · Text Generation · Translation/Localization
Participated as an annotator in a text annotation project for LLM development involving transcription and text refinement tasks. The project focused on transforming raw spoken or semi-structured content into clean, well-formed textual data suitable for training and evaluation of language models. Responsibilities included accurate transcription, content validation, and normalization of text to meet project-specific linguistic and formatting guidelines. Emphasis was placed on maintaining semantic fidelity, handling edge cases, and ensuring consistency across annotations through careful review and quality checks to support high-quality language model training.


2025
Appen

Annotator

Appen · Text · Text Generation · Translation/Localization
Worked as an annotator on a text-based data labeling project for LLM training involving transcription, text normalization, and language quality improvement. The scope of the project included converting spoken content into accurate written text, correcting errors, and ensuring consistency in spelling, punctuation, and formatting. Additional tasks included light text generation and localization adjustments to improve readability and naturalness while preserving the original meaning. Quality measures focused on strict guideline adherence, attention to linguistic detail, and consistency checks to ensure clean, high-quality textual data suitable for large-scale language model training.


2025
Scale AI

Contributor, Reviewer, Auditor

Scale AI · Audio · Audio Recording
Contributed to an audio data collection project for LLM training focused on natural conversational speech. As a contributor, participated in spontaneous, topic-based conversations with two to three speakers, each session lasting approximately 12 minutes, ensuring natural flow, clear articulation, and realistic dialogue dynamics. Subsequently worked as a reviewer, evaluating recorded conversations for audio quality, naturalness, topic relevance, and guideline compliance. Review tasks included identifying issues related to clarity, background noise, turn-taking, and conversational coherence, and providing structured feedback to maintain high-quality audio data standards for downstream model training.


2025
Scale AI

Contributor, Reviewer, Auditor

Scale AI · Text · RLHF · Fine-Tuning
Worked on a Turkish-language LLM training project involving RLHF, supervised fine-tuning (SFT), and prompt–response writing. The scope of the project included designing high-quality prompts, generating reference responses, and evaluating model outputs to support fine-tuning and preference learning. Key tasks involved response ranking, preference annotation, and detailed justification of choices based on accuracy, relevance, linguistic quality, and policy compliance. Special focus was placed on Turkish grammar, semantics, cultural context, and instruction-following behavior. Quality standards were maintained through strict guideline adherence, consistency checks, and iterative feedback to ensure reliable, high-quality training data for large language models.


2024 - 2025

Education


Dicle University

Bachelor of Arts, Turkish Language and Literature

Dates not specified

Marmara University

Master of Arts, Contemporary Turkish Language

Dates not specified

Work History


Ministry of Education

Teacher

Istanbul
2001 - Present

RemeConsult

Salesforce Developer

Remote
2022 - 2024