
Yiran Wang

AI Coding & LLM Evaluation Specialist (Java, Bilingual EN/ZH)

London, United Kingdom
$20.00/hr · Intermediate · Other

Key Skills

Software

Other

Top Subject Matter

AI Reasoning / Model Evaluation
Large Language Model Evaluation for Coding Tasks
Multilingual Tasks (EN/ZH)

Top Data Types

Text
Document

Top Task Types

Text Generation
Transcription
Data Collection
Question Answering

Freelancer Overview

I have experience contributing to AI training and evaluation projects, including annotating, reviewing, and improving model outputs across both natural language and coding tasks. I have worked on tasks involving response ranking, reasoning validation, and identifying inaccuracies in AI-generated content, with a strong focus on clarity, logical consistency, and alignment with user intent. With a background in Software Engineering and hands-on experience in Java, Spring Boot, and full-stack development, I am particularly strong in evaluating technical responses and code quality. I am also comfortable working in English and Chinese, enabling me to handle multilingual data effectively. My combination of analytical thinking, technical knowledge, and attention to detail allows me to contribute high-quality training data for advanced AI systems.

Intermediate · English · Chinese Mandarin

Labeling Experience

AI Evaluation Contractor (Coding & Bilingual Tasks)

Other · Text · Question Answering
Reviewed and evaluated outputs from large language models for coding and bilingual tasks, ensuring output quality and alignment. Assessed Java code generated by AI models for correctness, logical consistency, and completeness. Analyzed model responses and provided structured feedback for training data improvement.

• Compared multiple model-generated outputs for instruction adherence
• Rated and ranked responses according to specific rubrics
• Focused on identifying logical flaws and edge-case robustness in code
• Supported improvements to LLM training and data quality through feedback


2025 - Present

Education


University of Westminster

Master of Science, Software Engineering

2024 - 2025

University College London

Master of Science, Specialised Translation with Interpreting

2020 - 2021

Work History


JJL International Education Exchange Promotion

Study Abroad Consultant

Beijing
2022 - 2024