LLM Response Evaluation & Text Data Annotation Specialist
Worked on structured text annotation and AI response evaluation tasks designed to improve large language model performance.
- Evaluated AI-generated responses for factual accuracy, coherence, clarity, logical consistency, bias, and policy compliance.
- Performed text classification, sentiment tagging, and intent labeling according to detailed annotation guidelines.
- Compared multiple model outputs and ranked them on quality, instruction adherence, and safety.
- Supported reinforcement learning from human feedback (RLHF) by providing structured ratings and preference feedback used to improve model alignment and reduce hallucinations.
- Maintained high consistency and accuracy, with strict adherence to project rubrics in deadline-driven environments.