MultiMango Project — Outlier
Worked as an AI Annotator and Trainer on the MultiMango Project with Outlier, focusing on improving large language models through structured data labeling. Labeled, reviewed, and scored AI-generated outputs to ensure accuracy and alignment with guidelines, and maintained high standards for annotation consistency, safety, and quality assurance across distributed review pipelines.
• Evaluated AI-generated responses for accuracy, reasoning depth, coherence, and safety compliance.
• Scored outputs against structured rubrics and detailed annotation guidelines.
• Identified hallucinations, logical inconsistencies, and edge-case failures in LLM responses.
• Provided qualitative feedback and assessed complex prompts to improve model performance and robustness.