Developer—Audio Labeling for Voice Emotion Analyzer
Developed an audio-based emotion detection system using speech data and AI models. Labeled and annotated speech audio clips with emotion categories for model training and validation, using Librosa for preprocessing and feature extraction and PyTorch for model work.
• Created labeled datasets mapping audio clips to emotional states
• Worked with pre-trained Wav2Vec 2.0 models on the labeled data
• Performed manual validation and adjustment of emotion labels
• Applied audio analysis and filtering to ensure high-quality annotations