LLM DATA LABELING
Worked on AI data annotation and evaluation projects involving text generation and rating tasks. Reviewed AI-generated responses, labeled datasets, and rated outputs for relevance, accuracy, and language quality. Used CloudFactory’s annotation platform to tag and evaluate large volumes of data for machine learning model training. Ensured high-quality output by following detailed labeling guidelines, maintaining consistency across tasks, and meeting strict quality assurance standards. Collaborated with QA teams to act on review feedback and improve annotation accuracy while handling large datasets within project deadlines.