Bias Detection in Language Models - University Research Lab Project
Developed a Python toolkit for detecting biased language patterns in conversational AI training datasets. The toolkit was used within a university research lab to identify sources of bias during the dataset preparation phase, examining large text corpora and categorizing language patterns for machine learning applications.
• Processed conversational text data for analysis
• Applied classification methods to identify biased language patterns
• Contributed to dataset preparation for AI training
• Supported peers in research applications of the toolkit
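The classification step above can be illustrated with a minimal sketch. This is not the lab's actual implementation; the `BIAS_LEXICON` categories and patterns are hypothetical stand-ins for a curated lexicon, shown only to suggest how a rule-based pass over conversational text might flag candidate bias patterns.

```python
import re
from collections import Counter

# Hypothetical mini-lexicon of biased phrasings, grouped by category.
# A real toolkit would load a curated, research-validated lexicon.
BIAS_LEXICON = {
    "gendered": [r"\bchairman\b", r"\bmanpower\b"],
    "ageist": [r"\bdigital native\b", r"\byoung and energetic\b"],
}

def flag_bias_patterns(text):
    """Count regex matches per bias category in one text sample."""
    counts = Counter()
    lowered = text.lower()
    for category, patterns in BIAS_LEXICON.items():
        for pattern in patterns:
            counts[category] += len(re.findall(pattern, lowered))
    return counts

sample = "We need more manpower; ideally a young and energetic digital native."
print(dict(flag_bias_patterns(sample)))  # → {'gendered': 1, 'ageist': 2}
```

In practice such rule-based flags would serve as a first filtering pass during dataset preparation, with flagged samples routed to human review or a trained classifier.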