AI Text Evaluation & Data Annotation (Freelance/Remote)
I worked on remote AI training tasks focused on evaluating and improving AI-generated text responses for large language models. My responsibilities included rating responses for correctness, clarity, relevance, tone, and policy compliance against detailed rubrics; performing prompt-response evaluations; identifying hallucinations and factual errors; and providing structured feedback to improve model alignment. The work also covered text annotation, response rewriting, summarization checks, and transcription-based quality reviews. I followed strict project guidelines, maintained consistency across large task volumes, and delivered high-quality outputs suitable for model fine-tuning and reinforcement learning workflows.