AI Data Trainer | Fact-Checking & Response Evaluation
In this role at Soul AI, I fact-check AI responses to ensure their accuracy and reliability. For each task, I review a prompt and first determine whether the AI-generated response is understandable. I then assess two candidate responses, checking whether the claims they make are supported by credible, verifiable sources. The evaluation also covers hedging (where the AI avoids committing to a definitive answer), canned responses (where the model gives vague, unhelpful answers), and each response's overall clarity and completeness. Finally, I rank the responses by factual accuracy, explain the rationale behind my ranking with reliable online sources, and rate each response on a scale from "Accurate" to "Inaccurate," providing detailed feedback along with the sources used for verification.
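The workflow above can be pictured as a simple evaluation record and a best-first ranking rule. This is only an illustrative sketch, not Soul AI's actual tooling: the field names, the intermediate "Partially Accurate" label, and the `rank_responses` helper are all hypothetical, since the source only specifies a scale running from "Accurate" to "Inaccurate".

```python
from dataclasses import dataclass, field
from enum import Enum

class Accuracy(Enum):
    # Scale endpoints come from the role description; the middle label is assumed.
    ACCURATE = "Accurate"
    PARTIALLY_ACCURATE = "Partially Accurate"
    INACCURATE = "Inaccurate"

@dataclass
class ResponseEvaluation:
    """One reviewer's verdict on a single candidate response (hypothetical schema)."""
    response_id: str
    is_hedged: bool          # avoids committing to a definitive answer
    is_canned: bool          # vague, unhelpful boilerplate
    accuracy: Accuracy
    rationale: str
    sources: list[str] = field(default_factory=list)

def rank_responses(evals: list[ResponseEvaluation]) -> list[ResponseEvaluation]:
    """Order evaluations best-first: accurate, non-canned, non-hedged rank highest."""
    order = {Accuracy.ACCURATE: 0, Accuracy.PARTIALLY_ACCURATE: 1, Accuracy.INACCURATE: 2}
    return sorted(evals, key=lambda e: (order[e.accuracy], e.is_canned, e.is_hedged))
```

A reviewer would fill in one `ResponseEvaluation` per candidate response, then rank the pair to pick the better one.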