AI Model Evaluation & Technical Data Annotation (SaharaAI)
Served as an AI Model Evaluator for SaharaAI (Los Angeles, USA) on their decentralized Data Services Platform. Over a 6-month engagement, I performed human-feedback annotation for Reinforcement Learning from Human Feedback (RLHF) pipelines, rating model-generated text outputs for accuracy and safety. Drawing on my physics training, I specialized in identifying logical hallucinations and technical errors in STEM-related datasets, helping ensure high-fidelity knowledge assets for on-chain attribution.