Image Recognition for Low-Speed Autonomous Delivery
Led the annotation of multimodal sensor data (primarily camera images) for a low-speed autonomous vehicle system focused on urban last-mile delivery robots. The project involved labeling 20,000+ high-resolution images captured under diverse lighting and weather conditions to train perception models for object detection, lane-marking recognition, and pedestrian behavior prediction.

Tasks:
- Bounding boxes: Annotated vehicles, pedestrians, cyclists, and static obstacles (e.g., traffic cones, dumpsters) with tight alignment for precise localization.
- Semantic segmentation: Labeled drivable surfaces, sidewalks, and crosswalks pixel-wise to support path-planning algorithms.
- Polygons: Drew detailed annotations for irregular objects (e.g., construction barriers, partially visible assets) to minimize occlusion errors.
- Inter-annotator agreement (IAA): Maintained >95% consistency across a 5-annotator team via CVAT’s review workflows and overlap assignments.
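The IAA figure above was tracked through CVAT's own review tooling; as an illustration of how bounding-box consistency can be measured on overlap assignments, the sketch below computes pairwise agreement as the fraction of one annotator's boxes matched by another's at an IoU threshold. The function names (`iou`, `pairwise_agreement`) and the 0.7 threshold are hypothetical choices for this example, not part of the project's actual pipeline.

```python
def iou(box_a, box_b):
    """Intersection-over-Union for axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle; width/height clamp to 0 when boxes don't overlap.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def pairwise_agreement(boxes_a, boxes_b, threshold=0.7):
    """Fraction of annotator A's boxes that annotator B matched at IoU >= threshold."""
    if not boxes_a:
        return 1.0  # nothing to match: treat as full agreement
    matched = sum(1 for a in boxes_a if any(iou(a, b) >= threshold for b in boxes_b))
    return matched / len(boxes_a)
```

A per-image score like this, averaged over the overlap set and all annotator pairs, gives a simple team-level consistency number; the threshold controls how strict "tight alignment" is considered.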