Outlier Data Annotator (Freelance)
- Completed a range of AI training tasks on the Outlier platform, including prompt writing, response evaluation, and ranking model outputs on accuracy, helpfulness, and safety criteria.
- Annotated and labeled text data across diverse domains to support the fine-tuning of large language models (LLMs).
- Evaluated AI-generated responses against structured rubrics to identify factual errors and logical inconsistencies.
- Contributed to RLHF pipelines by providing human preference signals across thousands of data points.
- Handled multilingual tasks in Indonesian and English, consistently maintaining high task acceptance rates through strict adherence to platform annotation guidelines and quality benchmarks.