Research/Data Annotation Contributor – CrowdGen/Appen
As a Research/Data Annotation Contributor for CrowdGen/Appen, I performed annotation and quality evaluation of complex textual data. My work involved evidence-based reasoning and benchmarking in English, supported by detailed, structured rationales. Tasks included assessing the clarity, accuracy, and completeness of data samples used for AI training.

• Applied structured analytical reasoning to annotation projects
• Conducted reasoning-based evaluations of English-language prompts
• Produced step-by-step rationales and concise conclusions for annotated data
• Maintained audit-ready documentation of annotation quality