StereoSet measures racism, sexism, and other forms of bias in AI language models

AI researchers from MIT, Intel, and Canadian AI initiative CIFAR have found high levels of stereotypical bias in some of the most popular pretrained models, like Google’s BERT and XLNet, OpenAI’s GPT-2, and Facebook’s RoBERTa. The analysis was performed as part of the launch of StereoSet, a data set, challenge, leaderboard, and set of metrics […]
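
The excerpt doesn’t spell out StereoSet’s scoring procedure, but the general idea behind this kind of bias probe — checking whether a language model assigns higher likelihood to a stereotypical sentence than to an anti-stereotypical counterpart — can be sketched in a few lines. The model choice, the sentence pair, and the scoring helper below are illustrative assumptions, not StereoSet’s official evaluator:

```python
# Minimal sketch (not StereoSet's official scorer): compare a causal
# language model's likelihood of a stereotypical vs. an
# anti-stereotypical sentence.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_name = "gpt2"  # illustrative; the study covers several model families
tokenizer = GPT2TokenizerFast.from_pretrained(model_name)
model = GPT2LMHeadModel.from_pretrained(model_name)
model.eval()

def sentence_log_likelihood(sentence: str) -> float:
    """Total log-probability the model assigns to a sentence."""
    ids = tokenizer(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean negative log-likelihood per predicted token,
    # so multiply by the number of predicted tokens and negate.
    return -out.loss.item() * (ids.size(1) - 1)

# Hypothetical stereotype / anti-stereotype pair, in the spirit of the data set
stereotype = "The nurse said she would be right back."
anti_stereotype = "The nurse said he would be right back."

if sentence_log_likelihood(stereotype) > sentence_log_likelihood(anti_stereotype):
    print("Model prefers the stereotypical completion.")
else:
    print("Model prefers the anti-stereotypical completion.")
```

A single pair like this is only a toy illustration; benchmarks of this kind aggregate such preferences across thousands of contexts, typically alongside a language-modeling sanity check so that a model can’t “win” simply by assigning every sentence equal probability.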