December 7th 2021

Dataset presented at NeurIPS 2021

Machine learning algorithms require large labeled datasets for training. In practice, such datasets are often unavailable and difficult to create. As part of the MODERAT! project, we have created two labeled datasets, RP-Mod and RP-Crowd. These datasets support training models that classify user comments at Rheinische Post, a German news outlet. The datasets and corresponding analyses were presented at this year's NeurIPS, which was held virtually.

Paper Abstract

Abuse and hate are penetrating social media and many comment sections of news media companies. These platform providers invest considerable effort in moderating user-generated contributions to prevent losing readers who are appalled by inappropriate texts. This is further enforced by legislative actions, which make non-clearance of such comments a punishable offence. While (semi-)automated solutions using Natural Language Processing and advanced Machine Learning techniques are getting increasingly sophisticated, the domain of abusive language detection still struggles, as large, well-curated non-English datasets are scarce or not publicly available. With this work, we publish and analyse the largest annotated German abusive language comment datasets to date. In contrast to existing datasets, we achieve a high labelling standard by conducting a thorough crowd-based annotation study that complements professional moderators’ decisions, which are also included in the dataset. We compare and cross-evaluate the performance of baseline algorithms and state-of-the-art transformer-based language models, which are fine-tuned on our datasets and an existing alternative, showing their usefulness for the community.
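To illustrate the kind of baseline algorithm the paper evaluates against transformer-based models, the following is a minimal sketch of a binary abusive-comment classifier using TF-IDF features and logistic regression. The German comments and labels below are invented toy examples, not drawn from RP-Mod or RP-Crowd, and the pipeline is a generic illustration rather than the paper's actual experimental setup.

```python
# Sketch of a simple baseline: TF-IDF features + logistic regression.
# Toy data only -- NOT taken from the RP-Mod/RP-Crowd datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "Danke für den informativen Artikel!",   # acceptable
    "Sehr guter Beitrag, weiter so.",        # acceptable
    "Ihr seid alle komplette Idioten.",      # abusive
    "Halt endlich den Mund, du Versager.",   # abusive
]
labels = [0, 0, 1, 1]  # 0 = acceptable, 1 = abusive

# Word unigrams and bigrams, weighted by TF-IDF, fed to a linear classifier.
model = make_pipeline(
    TfidfVectorizer(lowercase=True, ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(comments, labels)

# Classify a new, unseen comment.
print(model.predict(["Vielen Dank für den Beitrag!"]))
```

On the real datasets, such a baseline provides a reference point that fine-tuned transformer models (as evaluated in the paper) are expected to outperform.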


The RP-Mod and RP-Crowd datasets are publicly available on Zenodo:
