AI Data Labeling & Prompt Evaluation for LLM Training
Annotated and evaluated training data for large language models: reviewed AI-generated responses, classified outputs against quality and safety guidelines, refined prompts, and validated reasoning accuracy. Followed strict quality-assurance processes, met weekly productivity targets, and maintained consistency, factual accuracy, and compliance with platform guidelines.