Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-based Hate

We demonstrate that machine learning models struggle to identify hateful content expressed with emoji. Using adversarial data collection, we build a substantially improved model.


Detecting online hate is a complex task, and low-performing models have harmful consequences when used for sensitive applications such as content moderation. Emoji-based hate is an emerging challenge for automated detection. We present HatemojiCheck, a test suite of 3,930 short-form statements that allows us to evaluate performance on hateful language expressed with emoji. Using the test suite, we expose weaknesses in existing hate detection models. To address these weaknesses, we create the HatemojiBuild dataset using a human-and-model-in-the-loop approach. Models trained on these 5,912 adversarial examples perform substantially better at detecting emoji-based hate, while retaining strong performance on text-only hate. Both HatemojiCheck and HatemojiBuild are publicly available in our GitHub repository; HatemojiCheck, HatemojiBuild, and the final Hatemoji model are also available on HuggingFace.

How to cite: Hannah Rose Kirk, Bertram Vidgen, Paul Röttger, Tristan Thrush, Scott A. Hale. (2022). Hatemoji: A Test Suite and Adversarially-Generated Dataset for Benchmarking and Detecting Emoji-based Hate. In Proceedings of the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL 2022).
