Autonomous Vehicle Multi-Class Object Detection & Segmentation Project
Led a large-scale computer vision data labeling project for autonomous vehicle perception systems, covering the annotation of over 750,000 high-resolution images and 15,000+ hours of dashcam video footage for real-time object detection and road scene understanding.

Specific tasks performed:
- Bounding box annotation for vehicles, pedestrians, cyclists, traffic signs, traffic lights, and obstacles
- Polygon and semantic segmentation for lane markings, road boundaries, and sidewalks
- Multi-object tracking across video frames
- Class balancing and dataset refinement for YOLO model training
- Annotation conversion to YOLO, COCO, and Pascal VOC formats
- Edge case identification (night driving, rain, occlusion scenarios)
- Dataset validation and correction cycles

Implemented a multi-layer quality assurance framework including:
- Double-blind annotation review
- 10% random sampling audits
- Automated script-based validation checks
- Inter-annotator agreement scoring (IAA above 9
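The format conversion work can be illustrated with a minimal sketch. The helper names below (`coco_to_yolo`, `yolo_to_voc`) are hypothetical, not the project's actual tooling; they assume the standard conventions of each format: COCO boxes are `[x_min, y_min, width, height]` in pixels, YOLO boxes are normalized `(x_center, y_center, width, height)`, and Pascal VOC boxes are pixel corner coordinates `(x_min, y_min, x_max, y_max)`.

```python
def coco_to_yolo(box, img_w, img_h):
    """Convert a COCO box [x_min, y_min, width, height] (pixels)
    to YOLO format (x_center, y_center, w, h), normalized to [0, 1]."""
    x, y, w, h = box
    return ((x + w / 2) / img_w, (y + h / 2) / img_h, w / img_w, h / img_h)


def yolo_to_voc(box, img_w, img_h):
    """Convert a normalized YOLO box back to Pascal VOC
    (x_min, y_min, x_max, y_max) pixel corner coordinates."""
    xc, yc, w, h = box
    return (round((xc - w / 2) * img_w), round((yc - h / 2) * img_h),
            round((xc + w / 2) * img_w), round((yc + h / 2) * img_h))


# Example: a 200x100 px box at (100, 50) in an 800x400 image
yolo_box = coco_to_yolo([100, 50, 200, 100], 800, 400)   # (0.25, 0.25, 0.25, 0.25)
voc_box = yolo_to_voc(yolo_box, 800, 400)                # (100, 50, 300, 150)
```

Round-tripping through the normalized representation like this is also a cheap sanity check during conversion audits, since corner coordinates should be recovered exactly (up to rounding).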
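The automated script-based validation checks might look like the following sketch for YOLO-format label files, where each line is "class_id x_center y_center width height" with normalized coordinates. The function name and the specific checks are illustrative assumptions, not the project's actual validation suite.

```python
def validate_yolo_label(line, num_classes):
    """Validate one line of a YOLO .txt label file.
    Returns None if valid, otherwise a short description of the problem."""
    parts = line.split()
    if len(parts) != 5:
        return "wrong field count"
    try:
        cls = int(parts[0])
        coords = [float(p) for p in parts[1:]]
    except ValueError:
        return "non-numeric field"
    if not 0 <= cls < num_classes:
        return "class id out of range"
    if not all(0.0 <= c <= 1.0 for c in coords):
        return "coordinate outside [0, 1]"
    if coords[2] == 0 or coords[3] == 0:
        return "zero-area box"
    return None


# Example: valid line vs. out-of-range class id (with 5 classes)
assert validate_yolo_label("2 0.5 0.5 0.2 0.1", num_classes=5) is None
assert validate_yolo_label("9 0.5 0.5 0.2 0.1", num_classes=5) == "class id out of range"
```

Running a check like this over every label file catches malformed annotations mechanically, leaving human review effort for semantic errors such as wrong classes or sloppy box placement.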
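One common way to compute inter-annotator agreement on class labels is Cohen's kappa, which corrects raw percent agreement for chance. The sketch below assumes two annotators have labeled the same list of objects; it is an illustration of the metric, not the project's actual scoring pipeline.

```python
from collections import Counter


def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators' class labels for the same objects:
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of objects where both annotators agree
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each annotator's class frequencies
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(counts_a) | set(counts_b)) / (n * n)
    return (observed - expected) / (1 - expected)


# Example: annotators agree on 3 of 4 objects
kappa = cohens_kappa(["car", "car", "pedestrian", "car"],
                     ["car", "pedestrian", "pedestrian", "car"])  # 0.5
```

In practice, bounding-box tasks also need a matching step (e.g. pairing boxes by IoU) before label agreement can be scored; the kappa computation itself is unchanged once objects are paired.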