Speech Emotion Detection Deep Learning Developer
Developed and trained a deep learning model to classify emotional states from speech data as part of a mental health application. Improved recognition robustness through data augmentation techniques such as noise injection and pitch alteration, and extracted MFCC and mel spectrogram features for training. Trained a CNN to categorize emotions in audio samples.
• Labeled and preprocessed speech samples by emotional state.
• Extracted acoustic features (MFCCs, mel spectrograms) for use in training.
• Supervised model training and evaluated classification performance.
• Trained the model to distinguish multiple emotional states, improving classification accuracy.
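
The augmentation and feature-extraction steps above can be sketched roughly as follows, using only NumPy. This is an illustrative sketch, not the project's actual pipeline: the function names, sample rate, and parameters are assumptions (a real implementation would more likely use a library such as librosa), and the pitch alteration here is a crude resampling that also changes duration.

```python
import numpy as np

def inject_noise(signal, snr_db=20.0, rng=None):
    """Augmentation: add white Gaussian noise at a target SNR (dB)."""
    rng = np.random.default_rng(0) if rng is None else rng
    sig_power = np.mean(signal ** 2)
    noise_power = sig_power / (10.0 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)

def alter_pitch(signal, semitones):
    """Augmentation: crude pitch shift by resampling (changes duration too)."""
    factor = 2.0 ** (semitones / 12.0)
    idx = np.arange(0.0, len(signal) - 1, factor)
    return np.interp(idx, np.arange(len(signal)), signal)

def mel_filterbank(sr, n_fft, n_mels):
    """Triangular filters spaced evenly on the mel scale."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        lo, ctr, hi = bins[m - 1], bins[m], bins[m + 1]
        for k in range(lo, ctr):
            fb[m - 1, k] = (k - lo) / max(ctr - lo, 1)
        for k in range(ctr, hi):
            fb[m - 1, k] = (hi - k) / max(hi - ctr, 1)
    return fb

def mel_spectrogram(signal, sr=16000, n_fft=512, hop=256, n_mels=40):
    """Windowed FFT power spectrum pooled through a mel filterbank."""
    n_frames = 1 + max(0, len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    frames = np.stack([signal[i * hop:i * hop + n_fft] * window
                       for i in range(n_frames)])
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2   # (frames, bins)
    return mel_filterbank(sr, n_fft, n_mels) @ power.T  # (mels, frames)

def mfcc(signal, n_mfcc=13, **kw):
    """MFCCs: DCT-II of the log mel spectrogram."""
    log_mel = np.log(mel_spectrogram(signal, **kw) + 1e-10)
    n = log_mel.shape[0]
    k = np.arange(n_mfcc)[:, None]
    m = np.arange(n)[None, :]
    basis = np.cos(np.pi * k * (2 * m + 1) / (2 * n))   # DCT-II basis
    return basis @ log_mel                              # (n_mfcc, frames)

# Example: augment a synthetic tone, then extract MFCC features
# of the kind a CNN would train on.
sr = 16000
tone = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)
augmented = alter_pitch(inject_noise(tone, snr_db=15.0), semitones=2)
features = mfcc(augmented, sr=sr)
```

In a real pipeline, the resulting MFCC or mel spectrogram matrices would be stacked into fixed-size "images" and fed to the CNN classifier as training inputs, with the augmented copies expanding the labeled dataset.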