LLM Evaluation & Annotation Project
Text-based LLM evaluation and annotation tasks: rating model completions, rewriting prompts, checking factuality, improving instruction-following, and performing multilingual QA.
I have hands-on experience contributing to AI training projects, including prompt editing, response evaluation, and fine-tuning model outputs for large language models. Over several months, I worked on improving model responses by assessing their quality, naturalness, and relevance across a variety of tasks. I'm fluent in English, Russian, and Spanish, with a strong sense of tone, clarity, and user intent. My strengths lie in prompt-response writing, translation and localization, and text generation. With an eye for detail and a structured approach, I strive to make AI outputs more accurate, human-like, and context-aware.
Degree, Global Economics
Service Department Support Manager
LLM Task Annotator / AI Data Contributor