Data Training Specialist, Fyxer AI
As a Data Training Specialist at Fyxer AI, I built high-quality datasets for supervised fine-tuning and reinforcement learning from human feedback (RLHF). I evaluated and annotated conversational datasets, focusing on instruction tuning and response safety assessment. I collaborated with AI researchers to refine annotation guidelines and improved model safety by addressing harmful or biased outputs.
• Tailored data labeling and annotation to specialized business domains such as legal, fintech, HR, and compliance.
• Labeled datasets for enterprise tasks including compliance automation and customer support workflows.
• Performed intent classification and quality assessment for business-oriented LLM use cases.
• Regularly updated annotation guidelines in response to evolving model performance and failure cases.