Prompt Engineer / LLM Data Trainer
• Conducted structured prompt-engineering experiments to optimize large language model outputs across task categories.
• Refined and evaluated LLM-generated content, directly supporting improvements to model responses through supervised fine-tuning.
• Documented AI workflows to ensure reproducibility and high-quality output evaluation.
• Collaborated with team members to assess and implement AI tooling for label generation and quality control.
• Used Python-based automation to preprocess textual data and evaluate model outputs.
• Integrated and tested OpenAI and Anthropic API endpoints for supervised model output training.
• Reduced manual workflow overhead by applying structured AI quality-assurance processes.