LLM Data Annotation & Prompt Engineering Specialist
Contributed to the training and fine-tuning of Large Language Models (LLMs) as a Data Annotation Specialist. Created high-quality prompt-response pairs for Supervised Fine-Tuning (SFT), generated diverse question-answer datasets, and evaluated AI-generated responses for accuracy, reasoning quality, clarity, and tone. Performed structured annotation following detailed taxonomies and quality guidelines, including ranking multiple model outputs, identifying factual inaccuracies, improving response quality, and rewriting responses to a natural, human-like standard. Ensured adherence to annotation standards, bias-mitigation practices, and consistency benchmarks. Supported Reinforcement Learning from Human Feedback (RLHF) workflows by scoring model outputs and providing detailed justifications to guide model improvements. Maintained high accuracy and met strict project deadlines while working on large-scale datasets.