Data Annotator
I worked on data annotation and AI training projects through platforms such as Outlier AI, contributing to large-scale machine learning initiatives across diverse datasets. Projects typically spanned thousands of data points per batch and covered domains including general knowledge, finance, and customer interaction scenarios.

My role involved text annotation, categorisation, and evaluation of AI-generated outputs, ensuring consistency, accuracy, and alignment with detailed project guidelines. Tasks formed part of broader workflows in which multiple annotators refined datasets used to train and validate AI models. These projects required working within structured annotation frameworks and quality assurance systems, where outputs were reviewed, calibrated, and continuously improved.

I handled varying levels of task complexity, from straightforward classification to reasoning-based evaluations, while consistently meeting productivity and accuracy targets. Although collaboration was indirect, my contributions fed into a larger pipeline involving reviewers, auditors, and model trainers. This experience strengthened my ability to work on high-volume, detail-sensitive projects, maintain consistency at scale, and deliver reliable data to support AI model development.