Reprocessed DetectRL Dataset
Large language models are increasingly used across a wide range of applications. To prevent illicit use, it is desirable to be able to detect AI-generated text. Training and evaluation of such detectors critically depend on suitable benchmark datasets. Several groups have taken on the tedious work of collecting, curating, and publishing large and diverse datasets for this task. However, ensuring high quality in all relevant aspects of such a dataset remains an open challenge. For example, the DetectRL benchmark exhibits relatively simple artifacts of AI generation in 98.5% of its Claude-generated data. These patterns include introductory phrases such as “Sure! Here is the academic article abstract:” or instances where the LLM refuses the prompted task.
In this work, we demonstrate that detectors trained on such data exploit these patterns as shortcuts, which facilitates spoofing attacks on the trained detectors. We therefore reprocessed the DetectRL dataset with several cleansing operations. Experiments show that this data cleansing makes direct attacks more difficult.
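To illustrate the kind of cleansing operation involved, the following Python sketch strips assistant-style preambles and discards refused completions. The regular expressions and the `clean_sample` helper are hypothetical examples, not the exact rules used for the reprocessed dataset.

```python
import re

# Hypothetical examples of assistant-style preambles,
# e.g. "Sure! Here is the academic article abstract:".
PREAMBLE_RE = re.compile(
    r"^\s*(sure|certainly|of course|here is|here's)[^\n]*?:\s*\n?",
    re.IGNORECASE,
)

# Hypothetical markers of a refused task.
REFUSAL_RE = re.compile(
    r"\b(i can(?:'|no)t (?:help|assist|write)|as an ai language model)\b",
    re.IGNORECASE,
)


def clean_sample(text: str) -> str | None:
    """Return a cleansed text, or None if the sample should be discarded."""
    if REFUSAL_RE.search(text):
        return None  # drop refusals entirely
    text = PREAMBLE_RE.sub("", text, count=1)  # strip a leading preamble, if any
    return text.strip()


if __name__ == "__main__":
    sample = "Sure! Here is the academic article abstract:\nWe study ..."
    print(clean_sample(sample))  # -> "We study ..."
```

A real pipeline would likely combine several such rules and manual spot checks; the sketch only conveys the general idea of removing superficial generation artifacts before training a detector.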
The dataset will be provided here in the next few days. Please direct questions to Christian Riess:
PD Dr.-Ing. Christian Riess
Department of Computer Science
Chair of Computer Science 1 (IT Security Infrastructures)
- Phone number: +49 9131 85-69906
- Email: christian.riess@fau.de