Autonomous Driving Object Detection Annotation Project
Led high-precision annotation for a large-scale autonomous driving dataset (over 150,000 images) sourced from urban and highway driving footage.
- Performed detailed bounding-box and semantic/instance segmentation annotation across diverse object classes: vehicles (cars, trucks, buses, motorcycles), pedestrians, cyclists, traffic signs, road markings, and obstacles.
- Applied multi-class attribute labeling for occlusion, truncation, direction of movement, and vehicle state (e.g., parked/moving, emergency lights on/off).
- Achieved consistent inter-annotator agreement above 98% through rigorous guideline adherence, regular calibration sessions, and double-blind reviews.
- Handled challenging edge cases such as low-light/night scenes, heavy rain, partial occlusions, and rare objects (e.g., construction equipment, animals on the road).
- Contributed to iterative guideline improvements that reduced labeling errors by 35% in subsequent batches.
This project supported training and validation of perception models for Level
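The bounding-box and attribute schema described above can be sketched as a simple record type. This is a minimal illustration only; the class names, enum values, and field names below are assumptions for the sketch, not the project's actual annotation format.

```python
from dataclasses import dataclass, field
from enum import Enum

class Occlusion(Enum):
    # Hypothetical coarse occlusion levels for the attribute labeling step
    NONE = 0
    PARTIAL = 1
    HEAVY = 2

@dataclass
class BoxAnnotation:
    # Axis-aligned bounding box in pixel coordinates: (x_min, y_min, x_max, y_max)
    bbox: tuple
    label: str                  # object class, e.g. "car", "pedestrian", "traffic_sign"
    occlusion: Occlusion = Occlusion.NONE
    truncated: bool = False     # object clipped by the image border
    # Free-form extra attributes, e.g. vehicle state or emergency lights
    attributes: dict = field(default_factory=dict)

# Example record: a partially occluded parked car
ann = BoxAnnotation(
    bbox=(120, 340, 310, 520),
    label="car",
    occlusion=Occlusion.PARTIAL,
    attributes={"state": "parked", "emergency_lights": "off"},
)
```

A per-image annotation file would then hold a list of such records plus the segmentation masks, which are omitted here for brevity.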
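The inter-annotator agreement figure can be computed, in its simplest form, as percent agreement over matched annotations between two annotators (more robust QA pipelines often use chance-corrected metrics such as Cohen's kappa). The helper below is an illustrative sketch, not the project's actual QA tooling.

```python
def percent_agreement(labels_a, labels_b):
    """Fraction of matched annotations where two annotators assigned the same label."""
    if len(labels_a) != len(labels_b):
        raise ValueError("label lists must be aligned 1:1 over matched boxes")
    matches = sum(a == b for a, b in zip(labels_a, labels_b))
    return matches / len(labels_a)

# Toy example: two annotators label the same five matched boxes
a = ["car", "car", "pedestrian", "cyclist", "car"]
b = ["car", "truck", "pedestrian", "cyclist", "car"]
rate = percent_agreement(a, b)  # 4 of 5 labels match -> 0.8
```

In practice the hard part is the matching itself (pairing each annotator's boxes by IoU before comparing labels); the threshold reported here would be computed over those matched pairs across calibration batches.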