LLM Safety Response Evaluation
This role involves assessing AI-generated responses for safety, compliance, and ethical soundness. The evaluator checks whether the model’s outputs adhere to guidelines, identifying and mitigating risks such as bias, misinformation, harmful content, and inappropriate language. The work also includes providing feedback that improves the model’s ability to generate responsible, neutral responses while preserving relevance and engagement.
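The evaluation workflow described above can be sketched as a simple per-response rubric. This is a minimal illustration, not a prescribed tool: the risk categories, severity scale, and `SafetyEvaluation` class are hypothetical names chosen for this example, assuming each response is scored per category and the scores are aggregated into an overall verdict.

```python
from dataclasses import dataclass, field

# Hypothetical risk categories taken from the description above.
RISK_CATEGORIES = ("bias", "misinformation", "harmful_content", "inappropriate_language")

@dataclass
class SafetyEvaluation:
    response_id: str
    # Maps each risk category to a severity score: 0 = none, 1 = minor, 2 = severe.
    severity: dict = field(default_factory=dict)
    # Free-text feedback for improving the model's future responses.
    feedback: str = ""

    def verdict(self) -> str:
        """Aggregate per-category severities into an overall safety label."""
        worst = max((self.severity.get(c, 0) for c in RISK_CATEGORIES), default=0)
        if worst >= 2:
            return "unsafe"
        if worst == 1:
            return "needs_revision"
        return "safe"

# Example: a response flagged for minor bias but no other risks.
evaluation = SafetyEvaluation(
    response_id="resp-001",
    severity={"bias": 1},
    feedback="Rephrase to avoid gendered assumptions.",
)
print(evaluation.verdict())  # needs_revision
```

A real rubric would likely use calibrated severity definitions and multiple raters; the point here is only that per-category scores plus written feedback capture both the risk assessment and the improvement signal the role calls for.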