Self-Directed AI Literacy & Platform Familiarization
This experience involved exploring popular AI training and annotation platforms and practising the rating of AI-generated responses. The focus was on assessing AI outputs for quality, accuracy, and guideline compliance, simulating real-world annotation workflows and building familiarity with rubric-driven evaluation processes and data-labeling environments.
• Evaluated AI-generated text for relevance, correctness, and adherence to detailed rubric instructions.
• Explored and practised annotation tasks on platforms such as DataAnnotation, Outlier, and Remotasks.
• Built fluency with the annotation guidelines, scoring criteria, and edge cases specific to each platform.
• Simulated industry-standard QA and review tasks used in AI training pipelines.