AI Model Annotator | SaharaAI
As an AI Model Annotator at SaharaAI, I contributed to model optimization by providing human feedback for Reinforcement Learning from Human Feedback (RLHF) pipelines. My work involved evaluating AI-generated responses for factual accuracy, safety, and coherence, and delivering detailed performance feedback. I consistently ranked among the top global contributors for high-quality data entries.

• Executed structured RLHF-based annotation tasks on the Sahara Testnet.
• Assessed model outputs for hallucinations, inappropriate content, and overall quality.
• Provided detailed, actionable feedback to drive improvements in model performance.
• Participated in a fast-paced, data-driven environment focused on continuous quality improvement.