Freelance Data Annotator
As a Freelance Data Annotator, I evaluated and refined outputs of large language models (LLMs) to improve response accuracy, safety, and alignment with human values. I engineered complex, multi-turn prompts to rigorously test generative AI reasoning and output quality, identifying flaws and edge cases in AI models as part of a reinforcement learning from human feedback (RLHF) process.
• Conducted targeted evaluation and iterative refinement of LLM outputs across a range of domains.
• Developed and deployed stress-test prompts to uncover logical and reasoning gaps in AI systems.
• Provided systematic feedback to improve LLM safety and adherence to ethical guidelines.
• Collaborated on data labeling tasks using Data Annotation Tech software.