Hoglee
The Hoglee project created annotated video datasets to advance computer vision and multimodal AI models, supplying labeled data on emotions, actions, and movements across diverse scenarios for human behavior analysis, affective computing, surveillance, and emotion-aware systems. I annotated short-to-medium-length clips across three task types:

- Emotion recognition: labeling facial and body expressions (happy, sad, angry, surprised, neutral) across frames.
- Action recognition: classifying and timestamping actions such as walking, running, gesturing, and object interaction, as well as complex behaviors like hugging.
- Tracking: drawing bounding boxes and keypoints to keep identities and trajectories consistent across frames, integrated with the emotion and action labels.

All tasks followed project guidelines covering label categories, confidence scoring, and edge cases, and were carried out in dedicated annotation tools. I contributed to thousands of diverse clips for robust AI training, achieving 95%+ inter-annotator agreement and 99%+ precision through strict guideline adherence, peer reviews, quality metrics, feedback loops, and ethics compliance.
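To make the three annotation layers concrete, here is a minimal sketch of what one clip's annotation record might look like. This assumes a JSON-style schema; all field names (`clip_id`, `emotions`, `actions`, `tracks`, `bbox`, and so on) are illustrative, not the project's actual format.

```python
import json

# Hypothetical annotation record for a single clip, combining the three
# task types described above (schema and values are invented for illustration).
record = {
    "clip_id": "clip_0001",
    "emotions": [
        # frame-range expression labels with annotator confidence
        {"frames": [0, 45], "label": "neutral", "confidence": 0.95},
        {"frames": [46, 120], "label": "happy", "confidence": 0.88},
    ],
    "actions": [
        # timestamped action segments (seconds)
        {"start": 0.0, "end": 1.8, "label": "walking"},
        {"start": 1.8, "end": 4.0, "label": "hugging"},
    ],
    "tracks": [
        # per-frame bounding boxes keyed by a persistent identity,
        # so the same person keeps track_id 1 across frames
        {"track_id": 1, "frame": 0, "bbox": [120, 80, 210, 310]},
        {"track_id": 1, "frame": 1, "bbox": [122, 81, 212, 311]},
    ],
}

print(json.dumps(record, indent=2))
```

A persistent `track_id` is what lets downstream models link an emotion or action label to one individual's trajectory rather than to a raw box.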
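Inter-annotator agreement of the kind cited above is commonly measured with Cohen's kappa, which corrects raw label agreement for chance. The sketch below computes it from scratch; the labels are made up, and the summary does not specify which agreement statistic the project actually used.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # observed agreement: fraction of items both annotators labeled identically
    observed = sum(x == y for x, y in zip(a, b)) / n
    # expected chance agreement from each annotator's label distribution
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical emotion labels from two annotators on the same six frames
ann1 = ["happy", "sad", "neutral", "happy", "angry", "neutral"]
ann2 = ["happy", "sad", "neutral", "happy", "neutral", "neutral"]
print(round(cohens_kappa(ann1, ann2), 3))  # prints 0.76
```

Kappa of 1.0 means perfect agreement; values near 0 mean agreement no better than chance, which is why it is a stricter quality signal than raw percent agreement.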