Data Annotator | LLM Evaluation, Prompt Engineering, AI Training
As a data annotator at Outlier AI, I annotated and evaluated LLM outputs across multiple domains, assessing factual accuracy, coherence, and reasoning quality in model responses through structured workflows. I also refined prompts and provided comparative feedback to support model improvement.
• Evaluated large language model outputs for quality control.
• Designed and refined prompts to improve model alignment.
• Performed structured analysis and side-by-side comparison of AI-generated responses.
• Delivered detailed feedback to improve response accuracy and safety.