Text & Code Annotation for AI Model Training
Contributed to AI model improvement by evaluating and rating model-generated responses across tasks ranging from general-knowledge questions to coding problems in Python and JavaScript. Compared multiple AI outputs and selected or rewrote the stronger response based on accuracy, clarity, and helpfulness. Authored original prompt-response pairs for supervised fine-tuning, ensuring responses were detailed, well structured, and free of hallucinations. Applied strict quality guidelines and exercised consistent judgment, including on ambiguous or edge-case inputs. Maintained strong quality scores throughout, with most submissions passing QA on first review.