AI Data Annotation & Evaluation Contributor
As an AI Data Annotation & Evaluation Contributor at Soul AI, I annotated and critically evaluated AI-generated content across diverse domains, focusing on identifying errors and inconsistencies in STEM and coding-related outputs and providing detailed recommendations for improvement. My work was consistently recognized for quality in internal QA reviews.
• Annotated and assessed LLM outputs for factuality, logical reasoning, and clarity.
• Detected inconsistencies and identified reasoning gaps in AI-generated text.
• Provided improvement suggestions aligned with evaluation guidelines.
• Followed QA protocols with accuracy and attention to detail.