Data Annotator
Project Overview: The "Street-Smart" Vision Initiative

The scope of this project was to develop a robust training dataset for an autonomous delivery drone system. We weren't just looking for "cars" and "trees"; we needed to teach the model to understand nuance: the difference between a stationary trash can and a pedestrian standing still, or the distinction between a clear path and a glass door.

Specific Data Labeling Tasks

To get the level of detail required, our team performed three primary types of annotation:

- 2D Bounding Boxes: Identifying all mobile actors (vehicles, cyclists, pedestrians) to establish spatial awareness.
- Semantic Segmentation: This was the heavy lifting. We pixel-masked static environments (sidewalks, roads, and "no-go" zones) to ensure the drone understood traversable surfaces.
- Keypoint Annotation: For human figures, we mapped joints (shoulders, knees, ankles) to help the model predict intent, such as whether a person is about to step off a curb.

Project Scale and Volume

Th
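To make the three annotation types concrete, here is a minimal sketch of what a single labeled frame might look like as Python records. All class names, field names, and values are illustrative assumptions for this write-up, not the project's actual schema or tooling output.

```python
from dataclasses import dataclass, field

# Hypothetical record types for the three annotation tasks described
# above; names and fields are illustrative, not the project's schema.

@dataclass
class BoundingBox:
    label: str       # mobile actor class, e.g. "pedestrian" or "cyclist"
    x: float         # top-left corner, pixel coordinates
    y: float
    width: float
    height: float

@dataclass
class SegmentationMask:
    label: str       # static surface class, e.g. "sidewalk" or "no-go"
    polygon: list    # [(x, y), ...] vertices outlining the pixel mask

@dataclass
class Keypoint:
    name: str        # joint name, e.g. "left_ankle"
    x: float
    y: float
    visible: bool = True  # whether the joint is occluded in this frame

@dataclass
class Frame:
    image_id: str
    boxes: list = field(default_factory=list)
    masks: list = field(default_factory=list)
    keypoints: list = field(default_factory=list)

# Example: one annotated frame combining all three annotation types.
frame = Frame(
    image_id="frame_000123",
    boxes=[BoundingBox("pedestrian", 412.0, 208.0, 64.0, 170.0)],
    masks=[SegmentationMask("sidewalk",
                            [(0, 540), (960, 540), (960, 720), (0, 720)])],
    keypoints=[Keypoint("left_ankle", 431.5, 371.0)],
)
print(frame.boxes[0].label, len(frame.masks))
```

Keeping the three annotation layers as separate lists on one `Frame` record mirrors how the tasks were performed independently, while still tying every label back to a single `image_id`.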