Prompt Engineer and Task Evaluator
As a Prompt Engineer and Task Evaluator at Data Annotation Tech, I crafted prompts, rated and edited LLM responses, and applied logical reasoning across diverse projects. I evaluated and refined AI-generated responses in both natural language and code, reviewed others' LLM task outputs, tested code-focused prompts, and aligned contributions with specific project goals.
• Performed evaluation and rating tasks for LLM outputs across a range of projects
• Applied prompt-engineering best practices and logical reasoning to improve model performance
• Reviewed, edited, and provided feedback on AI-generated outputs and peer work
• Tested AI capabilities using prompts in different programming languages and contexts