AI Trainer, Evaluator, and Annotator
Performed subjective evaluation of AI-generated Italian dubbing audio to support AI model development. Assessed dubbing quality against multiple criteria:
- Intelligibility and meaningfulness
- Voice naturalness
- Prosody and intonation
- Voice similarity (how closely the voice matches the original/target speaker)
- Audio quality degradation
- Text-to-speech similarity (alignment/synchronization between the script text and the spoken audio)
Used the proprietary SRT Halo platform. Followed competitive benchmarking and monolingual subjective evaluation guidelines. Average task execution time: ~480 seconds. Contributed to the improvement of AI-based dubbing systems through detailed human evaluations.