Emotion Recognition in Conversations using MELD
In this project, I developed a multimodal model for the automated detection of questionable or obscene content in children's media, combining multilingual natural language processing, text and video analysis, and deep learning. A key part of my role was fine-tuning the MARLIN model for multimodal emotion detection and recognition on the MELD dataset. This required careful annotation to label and align the different modalities, including facial expressions, audio cues, and textual information, which improved the model's emotion-classification accuracy. I also served as a liaison and translator between the Mexican and American research groups, facilitating collaboration and ensuring that diverse perspectives were integrated into the project. My data-annotation work was central to training the model to recognize complex emotional cues across languages and cultures.
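As a rough illustration of this fine-tuning setup, the sketch below attaches a linear classification head for MELD's seven emotion labels (anger, disgust, fear, joy, neutral, sadness, surprise) to a pretrained MARLIN video encoder. It is a minimal sketch, not the project's actual training code: it assumes the `marlin_pytorch` package with its `Marlin.from_online` loader and `extract_features` method as documented in the MARLIN repository, and the checkpoint name, 768-dim feature size, and (B, 3, 16, 224, 224) face-crop input shape are assumptions that may differ by version.

```python
import torch
import torch.nn as nn
from marlin_pytorch import Marlin  # assumed package from the MARLIN repository

# MELD's seven emotion classes, in alphabetical order.
MELD_EMOTIONS = ["anger", "disgust", "fear", "joy", "neutral", "sadness", "surprise"]

class MarlinEmotionClassifier(nn.Module):
    """Pretrained MARLIN video encoder plus a linear head for MELD emotions."""

    def __init__(self, num_classes: int = len(MELD_EMOTIONS), feat_dim: int = 768):
        super().__init__()
        # Assumed checkpoint name from the MARLIN release; may differ by version.
        self.encoder = Marlin.from_online("marlin_vit_base_ytf")
        self.head = nn.Linear(feat_dim, num_classes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, 3, 16, 224, 224) face crops, per the MARLIN input spec.
        # keep_seq=False (assumed API) pools the token sequence to (batch, feat_dim).
        feats = self.encoder.extract_features(clips, keep_seq=False)
        return self.head(feats)

model = MarlinEmotionClassifier()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of two face-cropped clips.
clips = torch.randn(2, 3, 16, 224, 224)
labels = torch.tensor([4, 3])  # e.g. neutral, joy

optimizer.zero_grad()
loss = criterion(model(clips), labels)
loss.backward()
optimizer.step()
```

In practice the head would be trained on utterance-level face crops extracted from MELD clips, optionally alongside audio and text features for the full multimodal setup; this sketch covers only the visual branch.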