Sensor and Image Annotation for Autonomous EV Delivery Robot Navigation
I worked on a nationally funded project to develop an Autonomous EV Delivery Robot capable of navigating indoor and outdoor environments. My role included annotating sensor and camera data for object detection (obstacles, road/path markers), classifying terrain types, and tracking moving objects (e.g., pedestrians, other robots). I labeled over 10 hours of video data and thousands of sensor logs to support the training of computer vision and path-planning models. I also contributed to map-based annotation for routing and SLAM (Simultaneous Localization and Mapping), ensuring the annotations stayed aligned with the robot's navigation logic.
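The annotation work described above can be sketched as a simple per-frame schema combining object boxes, terrain class, and track IDs. The class names, label strings, and field layout here are illustrative assumptions, not the project's actual data format:

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class BoundingBox:
    # Pixel coordinates of one labeled object in a video frame
    x_min: int
    y_min: int
    x_max: int
    y_max: int
    label: str                      # e.g. "pedestrian", "obstacle", "path_marker"
    track_id: Optional[int] = None  # persistent ID for moving-object tracking

@dataclass
class FrameAnnotation:
    frame_index: int
    terrain: str                              # e.g. "pavement", "grass", "gravel"
    boxes: List[BoundingBox] = field(default_factory=list)

# Example: one annotated frame containing a tracked pedestrian
frame = FrameAnnotation(
    frame_index=1042,
    terrain="pavement",
    boxes=[BoundingBox(120, 80, 180, 240, "pedestrian", track_id=7)],
)
```

Keeping a stable `track_id` across frames is what lets the downstream tracking model learn object persistence, while the per-frame `terrain` label feeds the terrain classifier.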