AI Research & Evaluation Practitioner
Collaborated with AI systems and large language models (LLMs) to analyze, review, and refine outputs for accuracy, clarity, and logical consistency across diverse technical and market research domains. Used prompt engineering, structured research, and multi-source verification to assess and improve AI-generated outputs. Applied evaluation frameworks and provided structured feedback to improve reasoning, factual correctness, and bias detection.
• Reviewed outputs for logical soundness, factual grounding, and absence of reasoning errors.
• Designed prompt templates and evaluation rubrics tailored to real-world scenario requirements.
• Conducted web research and fact verification to support evidence-based assessment.
• Delivered comprehensive written feedback to support iterative improvement of AI systems.