3D Sensor Annotation for Autonomous Vehicle Perception
This project involved annotating LiDAR and camera sensor data to train perception models for autonomous driving systems. I labeled thousands of frames with 3D bounding boxes (cuboids) to identify vehicles, pedestrians, cyclists, and static infrastructure. Using CVAT, I applied polygon and segmentation tools to delineate road boundaries, lane markings, and traffic signs with pixel-level precision.

The dataset spanned urban, suburban, and highway environments, requiring careful attention to occlusions, motion blur, and edge cases such as partially visible or overlapping objects.

Quality assurance was a top priority: I followed strict labeling guidelines, participated in regular calibration reviews, and maintained over 98% accuracy across validation checks. My contributions helped improve object tracking and scene understanding for real-time decision-making in autonomous navigation.
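As an illustration of the kind of validation behind an accuracy figure like the one above, annotated cuboids can be compared against a reviewed ground-truth set using 3D intersection-over-union (IoU). The sketch below assumes axis-aligned cuboids in a simple center-plus-dimensions format; the project's actual label schema and QA tooling are not specified in this description.

```python
from dataclasses import dataclass


@dataclass
class Cuboid:
    """Axis-aligned 3D box: center (x, y, z) and dimensions (l, w, h).
    Hypothetical format for illustration, not the project's real schema."""
    x: float
    y: float
    z: float
    l: float
    w: float
    h: float


def iou_3d(a: Cuboid, b: Cuboid) -> float:
    """Intersection-over-union of two axis-aligned cuboids."""
    def overlap(center_a: float, size_a: float,
                center_b: float, size_b: float) -> float:
        # Length of the 1D overlap between the two intervals on one axis.
        lo = max(center_a - size_a / 2, center_b - size_b / 2)
        hi = min(center_a + size_a / 2, center_b + size_b / 2)
        return max(0.0, hi - lo)

    inter = (overlap(a.x, a.l, b.x, b.l)
             * overlap(a.y, a.w, b.y, b.w)
             * overlap(a.z, a.h, b.z, b.h))
    union = a.l * a.w * a.h + b.l * b.w * b.h - inter
    return inter / union if union > 0 else 0.0
```

A QA pass might then count a label as correct when its IoU with the reference box exceeds a threshold (0.7 is a common choice for vehicles in benchmarks such as KITTI) and report the fraction of correct labels as the accuracy score.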