Independent AI Prompt Evaluator & Annotator
As an Independent AI Prompt Evaluator & Annotator, I evaluated and ranked AI-generated responses across multiple domains, focusing on accuracy and relevance. I conducted systematic prompt-response analysis and provided quality assurance for large language model (LLM) training datasets, with an emphasis on reinforcement learning from human feedback (RLHF) and domain-specific fact-checking in the life sciences.
• Performed structured response grading for tone, coherence, and helpfulness
• Validated AI-generated life-sciences output, leveraging microbiology expertise
• Documented and upheld annotation guidelines to ensure rating consistency
• Worked across scientific, technical, and general-knowledge prompts