AI Model Evaluation & NLP Data Annotation Specialist
Worked on large-scale AI training and evaluation projects focused on improving Large Language Model (LLM) performance, safety, and response quality.

Project Scope:
- Annotated and reviewed 1,000+ text samples
- Contributed to dataset refinement for fine-tuning conversational AI systems
- Worked independently in a remote environment with structured QA feedback loops

Quality Measures:
- Adhered strictly to project annotation rubrics
- Cross-checked responses using a structured evaluation framework
- Maintained high inter-annotator agreement standards