Invisible Technologies — Amazon Phoenix (RLHF 3H Framework)
Evaluated large volumes of model-generated responses under the Helpful, Honest, and Harmless (3H) framework for Amazon’s Phoenix RLHF pipeline. Applied hallucination detection, bias and safety screening, and factuality verification across text-based dialogue and Q&A tasks. Authored structured preference rationales that clarified trade-offs between helpfulness and safety. Contributed to an approximately 40% improvement in factual accuracy and to a reduction in unsafe outputs across downstream LLM deployments.
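
For illustration, below is a minimal Python sketch of what a structured 3H evaluation record and pairwise preference decision might look like. The ThreeHEvaluation class, its field names, the 1–5 scoring scale, and the harmless-first tie-breaking order are all illustrative assumptions, not Phoenix’s actual schema or policy.

```python
from dataclasses import dataclass

# Hypothetical record for one 3H evaluation of a single model response.
# Field names and the 1-5 scale are illustrative assumptions only.
@dataclass
class ThreeHEvaluation:
    response_id: str
    honest: int      # factuality / hallucination check, 1 (worst) to 5 (best)
    helpful: int     # task completion and relevance, 1 to 5
    harmless: int    # bias and safety screening, 1 to 5
    rationale: str   # structured rationale explaining the trade-offs

def prefer(a: ThreeHEvaluation, b: ThreeHEvaluation) -> ThreeHEvaluation:
    """Pick the preferred response. Safety (harmless) is compared first,
    then honesty, then helpfulness - one plausible trade-off ordering."""
    for axis in ("harmless", "honest", "helpful"):
        score_a, score_b = getattr(a, axis), getattr(b, axis)
        if score_a != score_b:
            return a if score_a > score_b else b
    return a  # tie on all three axes: default to the first response

# Example: two candidate responses to the same prompt.
cand_a = ThreeHEvaluation("resp-a", honest=5, helpful=3, harmless=5,
                          rationale="Accurate but terse; safely declines risky detail.")
cand_b = ThreeHEvaluation("resp-b", honest=3, helpful=5, harmless=5,
                          rationale="More helpful but contains an unverified claim.")
print(prefer(cand_a, cand_b).response_id)  # resp-a: honesty breaks the safety tie
```

Gating safety before honesty and helpfulness mirrors the trade-off described in the rationales above, where a safer response is preferred even at some cost to helpfulness; the exact ordering used in production is not specified here.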