Remote AI Content Evaluator & Internet Research Analyst
Reviewed and evaluated AI-generated content for relevance, accuracy, and safety across diverse topics, following structured guidelines. Performed QA checks on annotation tasks, identifying inconsistencies and ensuring dataset quality. Provided actionable feedback to fellow annotators and contributed to ongoing improvements in AI model performance. Maintained accuracy rates above 95% while processing large volumes of annotation tasks weekly.
• Conducted prompt and output analysis for LLMs.
• Contributed to training datasets by correcting and rewriting responses in natural language.
• Verified facts through detailed internet research to ensure content reliability.
• Adapted to evolving guidelines and project requirements across multiple AI programs.