LLM Text Annotation & Quality Assurance with Invisible Technologies
Collaborated with Invisible Technologies on LLM annotation projects, labeling large-scale text datasets for natural language understanding and generation. Performed quality assurance to verify annotation accuracy and guideline adherence, improving dataset quality for model fine-tuning and reinforcement learning from human feedback (RLHF). Helped refine annotation guidelines and evaluated chain-of-thought reasoning traces to strengthen AI model reasoning capabilities.