Data Annotation Specialist
I worked on a large-scale data labeling project focused on reviewing and annotating text and image datasets to improve their quality and usability. The scope included categorizing content, tagging key elements, identifying patterns, and flagging ambiguous or inconsistent entries against detailed project guidelines.

My tasks covered classification, semantic tagging, bounding box annotation, content relevance checks, and dataset cleanup. I also identified edge cases, corrected mislabeled data, and kept labeling consistent across batches while meeting turnaround requirements. The project spanned thousands of data points across multiple batches, demanding accuracy, speed, and strict adherence to labeling standards.

Quality measures included following structured annotation guidelines, performing self-QA checks before submission, and addressing reviewer feedback. I also took part in rework cycles, cross-validation tasks, and spot-check reviews. The focus throughout was on delivering clean, reliable, well-structured labeled data while maintaining confidentiality and compliance with project requirements.
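To give a concrete sense of the self-QA step, here is a minimal sketch, entirely my own illustration rather than the project's actual tooling: the label set, record layout, and function names are assumptions. It checks that each annotation uses an approved label and that each bounding box lies inside its image before a batch is submitted.

```python
# Hypothetical self-QA check for an annotation batch (illustrative only).

APPROVED_LABELS = {"person", "vehicle", "sign"}  # assumed label set


def validate_record(record):
    """Return a list of QA issues found in one annotation record."""
    issues = []
    if record["label"] not in APPROVED_LABELS:
        issues.append(f"unknown label: {record['label']}")
    x, y, w, h = record["bbox"]          # (x, y, width, height) in pixels
    img_w, img_h = record["image_size"]
    if w <= 0 or h <= 0:
        issues.append("non-positive box size")
    if x < 0 or y < 0 or x + w > img_w or y + h > img_h:
        issues.append("box outside image bounds")
    return issues


def qa_batch(records):
    """Map record index -> list of issues, for records that fail self-QA."""
    report = {}
    for i, rec in enumerate(records):
        issues = validate_record(rec)
        if issues:
            report[i] = issues
    return report
```

In practice, any flagged record would be corrected or escalated before the batch was submitted, which is how mislabeled data and out-of-bounds boxes were caught early.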