Data Annotator – Outlier AI
As a Data Annotator at Outlier AI, I contributed to the training and evaluation of large language models through text-based annotation. My responsibilities included reviewing AI-generated code, crafting prompts, and performing detailed quality checks to improve the performance of AI systems. Careful adherence to strict project guidelines ensured consistently high-quality annotations and reliable output.

• Executed comprehensive text annotation tasks focused on improving model accuracy.
• Wrote and refined prompts to assess reasoning, coding, and language comprehension.
• Reviewed model-generated code for correctness and adherence to instructions.
• Conducted quality assurance checks for logical errors, hallucinations, and formatting issues.