Egocentric Annotation Program
This project involves annotating human egocentric video: identifying and segmenting physical actions performed from a first-person (ego) perspective. Annotators review footage captured by wearable or first-person cameras and divide it into distinct action segments, each representing a meaningful task step performed by the human operator.

The core tasks are accurately identifying action boundaries, labeling each segment according to the observed activity, and ensuring that every segment captures a single, coherent action within the broader task workflow. Annotations follow detailed project guidelines to maintain consistency across videos and environments.

Quality measures include strict adherence to the segmentation rules, careful review of action transitions, avoidance of overlapping or mixed segments, and self-validation of annotations prior to submission. The resulting annotations support the development of computer vision and embodied AI systems.
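The segmentation rules described above (one label per segment, no overlapping or mixed segments, self-validation before submission) can be sketched as a simple automated check. This is an illustrative sketch only: the `Segment` fields and the `validate` helper are assumptions for demonstration, not the project's actual annotation schema or tooling.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One annotated action segment (field names are illustrative)."""
    start: float   # segment start time in seconds
    end: float     # segment end time in seconds
    label: str     # single action label, e.g. "reach for tool"

def validate(segments):
    """Check basic segmentation rules before submission:
    positive duration, a non-empty label, and no overlapping segments."""
    errors = []
    ordered = sorted(segments, key=lambda s: s.start)
    for s in ordered:
        if s.end <= s.start:
            errors.append(f"non-positive duration: {s}")
        if not s.label.strip():
            errors.append(f"missing label: {s}")
    # Adjacent segments may touch (b.start == a.end) but must not overlap.
    for a, b in zip(ordered, ordered[1:]):
        if b.start < a.end:
            errors.append(f"overlap between {a.label!r} and {b.label!r}")
    return errors

segs = [
    Segment(0.0, 3.2, "reach for tool"),
    Segment(3.2, 7.5, "tighten bolt"),
]
print(validate(segs))  # → []
```

A check like this would run as a final self-validation pass, flagging any segment list that violates the rules before the annotator submits it.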