Source Verification for LLM Factual Accuracy
Performed specialized Quality Control (QC) tasks focused on validating the factual accuracy of AI-generated text. Analyzed every factual claim produced by the LLM, conducting targeted online research to locate at least one official, reliable source for verification. Detected and documented AI hallucinations, classifying each text as correct or factually inaccurate based on external evidence. Logged a precise citation (source URL) for every evaluation to justify the final assessment and support the model's reliability metrics.