Data Annotator for Autonomous Vehicle Perception
- Annotated a large-scale dataset of over 100,000 images and video frames to train perception models for self-driving cars.
- Applied precise polygon and bounding-box labels to vehicles, pedestrians, and cyclists, and ensured temporal consistency of labels across video sequences.
- Consistently maintained a 99.7% quality score, surpassing the project benchmark of 98.5%.
- Contributed to quality assurance through peer review and improved project guidelines by identifying and documenting edge cases.
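To illustrate the kind of data involved, the sketch below shows a COCO-style bounding-box record and a minimal validity check of the sort a QA pass might apply. This is a hypothetical example, not the project's actual schema; the field names and the `bbox_is_valid` helper are assumptions.

```python
def bbox_is_valid(ann, img_w, img_h):
    """Check that an [x, y, w, h] box has positive size and lies inside the image."""
    x, y, w, h = ann["bbox"]
    return w > 0 and h > 0 and x >= 0 and y >= 0 and x + w <= img_w and y + h <= img_h

# Illustrative annotation record (field names are assumptions)
annotation = {
    "image_id": 1042,
    "category": "pedestrian",             # e.g. vehicle, pedestrian, cyclist
    "bbox": [412.0, 230.5, 64.0, 128.0],  # x, y, width, height in pixels
}

print(bbox_is_valid(annotation, img_w=1920, img_h=1080))  # True
```

Automated checks like this catch gross errors (boxes outside the frame, zero-area boxes); the peer-review step described above handles the subtler judgment calls, such as occluded or truncated objects.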