EGO-HOW-TO
I worked on the EGO-HOW-TO project, labeling egocentric (first-person) videos to help AI models understand everyday human activities. Using annotation tools, I applied bounding boxes, tracked hand-object interactions, and labeled actions such as grabbing, cutting, and pouring. The work required frame-accurate tracking, close attention to detail, and consistent application of a complex activity-label taxonomy. My annotations supported AI systems used in robotics, AR, and assistive technology.
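To give a sense of the kind of data produced, here is a minimal sketch of what one frame-level annotation record might look like. The field names, the label vocabulary, and the box format are hypothetical illustrations, not the project's actual schema; the point is that each record ties an action label and hand/object boxes to an exact frame index, with the label constrained to a fixed vocabulary for consistency.

```python
from dataclasses import dataclass

# Hypothetical closed action vocabulary; real projects use a much larger taxonomy.
ACTION_VOCAB = {"grab", "cut", "pour"}


@dataclass
class FrameAnnotation:
    frame_index: int            # frame-accurate: tied to one exact video frame
    action: str                 # must come from the agreed label set
    hand_box: tuple             # (x, y, width, height) in pixels
    object_box: tuple           # box around the manipulated object

    def __post_init__(self):
        # Reject labels outside the vocabulary so annotations stay consistent.
        if self.action not in ACTION_VOCAB:
            raise ValueError(f"unknown action label: {self.action!r}")


# Example record: a hand cutting an object at frame 1042.
ann = FrameAnnotation(frame_index=1042, action="cut",
                      hand_box=(310, 220, 80, 60),
                      object_box=(350, 260, 120, 40))
```

A validation step like the one in `__post_init__` is a simple way to enforce the consistent labeling the project required: a typo such as "cutt" fails immediately instead of silently polluting the dataset.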