Data Labeling & AI Training Contributor
Evaluated AI model responses for factual accuracy, logical coherence, and adherence to instructions. Performed a range of annotation tasks, including ranking model outputs and labeling datasets for tone, intent, sentiment, and semantic quality. Developed original prompt-response pairs to strengthen the conversational and reasoning abilities of large language models.

• Assessed AI-generated outputs against structured rubrics across technical and general-knowledge prompts.
• Labeled and tagged datasets to support model fine-tuning and alignment for safety and intent.
• Flagged hallucinations, harmful outputs, and reasoning errors in model-generated text.
• Authored high-quality training data and comparative responses for RLHF and SFT objectives.