Video Data Annotator
I specialize in Human Egocentric Video Understanding, contributing to the development of AI models for wearable technology and augmented reality systems. My core responsibility is rigorous frame-by-frame analysis for temporal segmentation: identifying precise start and end timestamps for fine-grained human actions and object interactions. This work demands close attention to temporal detail so the AI can accurately recognize complex behaviors from a first-person perspective. To preserve dataset integrity, I adhere to a strict dual-mode annotation schema, applying either "Dense" labeling for rapid, manipulative interactions or "Coarse" labeling for broader spatial movements, while enforcing rigid exclusion rules that prevent mixed styles within a single episode. I consistently maintain high acceptance rates by tracking object consistency and conducting thorough self-audits to ensure every action and segment meets the required standards.
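The dual-mode schema and its exclusion rules can be sketched in code. The snippet below is a minimal, hypothetical validator (the `Segment` structure and field names are illustrative assumptions, not the actual annotation tooling): it checks that a single episode never mixes "Dense" and "Coarse" labels and that every segment's end timestamp follows its start.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float  # start timestamp in seconds
    end: float    # end timestamp in seconds
    label: str    # action label, e.g. "pick up cup"
    mode: str     # annotation mode: "dense" or "coarse"

def validate_episode(segments: list[Segment]) -> list[str]:
    """Return a list of rule violations for one episode (empty if clean)."""
    errors = []
    # Exclusion rule: an episode must use exactly one annotation style.
    modes = {s.mode for s in segments}
    if len(modes) > 1:
        errors.append(f"mixed annotation modes in episode: {sorted(modes)}")
    # Temporal integrity: every segment needs a positive duration.
    for i, s in enumerate(segments):
        if s.end <= s.start:
            errors.append(f"segment {i} has non-positive duration")
    return errors
```

A self-audit pass of this kind would run over each episode before submission, surfacing schema violations early rather than at review time.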