The Conversation : "Problematic Paper Screener: Trawling for fraud in the scientific literature"
Research
On January 31, 2025
The Problematic Paper Screener trawls through 130 million scholarly papers every week looking for telltale signs that papers were produced by paper mills. winhorse/iStock via Getty Images
Science sleuths are stepping up efforts to detect bogus science papers. This includes building tools that comb through millions of journal articles for signs of "tortured phrases": mangled synonyms for established technical terms. These phrases typically result from using automatic paraphrasing tools to evade plagiarism-detection software when stealing someone else's text. Real examples include bungled synonyms for the United States, breast cancer, kidney failure, artificial neural networks, and lactose intolerance.
We are a pair of computer scientists at Université de Toulouse and Université Grenoble Alpes, both in France, who specialize in detecting bogus publications. One of us, Guillaume Cabanac, has built an automated tool that combs through 130 million scientific publications every week and flags those containing tortured phrases.
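At its core, flagging tortured phrases amounts to scanning each paper's text against a curated list of known mangled terms. The sketch below is a minimal illustration, not the screener's actual implementation; the phrase list is a small assumed sample (e.g., "bosom peril" for breast cancer, "counterfeit consciousness" for artificial intelligence, both documented examples of tortured phrases), whereas the real tool relies on a much larger, continuously updated list.

```python
import re

# Illustrative sample of tortured phrases mapped to the established
# terms they mangle (a tiny assumed subset for this sketch).
TORTURED_PHRASES = {
    "bosom peril": "breast cancer",
    "kidney disappointment": "kidney failure",
    "counterfeit consciousness": "artificial intelligence",
    "lactose bigotry": "lactose intolerance",
}

def find_tortured_phrases(text: str) -> list[tuple[str, str]]:
    """Return (tortured phrase, expected term) pairs found in text."""
    lowered = text.lower()
    hits = []
    for phrase, expected in TORTURED_PHRASES.items():
        # Word boundaries avoid matching inside longer words.
        if re.search(r"\b" + re.escape(phrase) + r"\b", lowered):
            hits.append((phrase, expected))
    return hits

paper = "We apply counterfeit consciousness to predict bosom peril risk."
print(find_tortured_phrases(paper))
```

Run at scale, a screener like this would apply such a scan to full-text papers and flag any article whose hit count crosses a threshold, for example the five-phrase cutoff mentioned below.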
In addition to tortured phrases, the Problematic Paper Screener flags ChatGPT fingerprints: snippets of telltale text left behind by the AI agent. Screenshot by The Conversation, CC BY-ND
Several publishers use our paper screener, which has been instrumental in more than 1,000 retractions. Some have integrated the technology into the editorial workflow to spot suspect papers upfront. Analytics companies have used the screener for things like picking out suspect authors from lists of highly cited researchers. It was named one of 10 key developments in science by the journal Nature in 2021.
So far, we have found:
Nearly 19,000 papers containing at least five tortured phrases each.
More than 280 gibberish papers – some still in circulation – written entirely by the spoof SCIgen program that Massachusetts Institute of Technology students came up with nearly 20 years ago.
More than 764,000 articles that cite retracted works and may therefore be unreliable. About 5,000 of these articles list at least five retracted references in their bibliographies. We called the software that finds them the "Feet of Clay" detector, after the biblical dream story in which a hidden flaw is found in a seemingly strong and magnificent statue. These articles need to be reassessed and potentially retracted.
More than 70 papers containing ChatGPT “fingerprints” with obvious signs such as “Regenerate Response” or “As an AI language model, I cannot …” in the text. These articles represent the tip of the tip of the iceberg: They are cases where ChatGPT output has been copy-pasted wholesale into papers without any editing (or even reading) and has also slipped past peer reviewers and journal editors alike. Some publishers allow the use of AI to write papers, provided the authors disclose it. The challenge is to identify cases where chatbots are used not just for language-editing purposes but to generate content – essentially fabricating data.
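Detecting fingerprints of this kind reduces to searching each paper for a handful of telltale strings. A minimal sketch, using only the two fingerprint phrases quoted above as the search list (the real screener's list is an assumption beyond these examples):

```python
# Telltale strings left behind when raw chatbot output is pasted
# into a manuscript unedited (the examples quoted in the text).
CHATGPT_FINGERPRINTS = [
    "Regenerate Response",
    "As an AI language model",
]

def find_fingerprints(text: str) -> list[str]:
    """Return any chatbot fingerprint strings present in the text."""
    lowered = text.lower()
    return [fp for fp in CHATGPT_FINGERPRINTS if fp.lower() in lowered]

abstract = "As an AI language model, I cannot access patient records."
print(find_fingerprints(abstract))
```

Such string matching only catches the wholesale copy-paste cases; papers where chatbot output has been lightly edited leave no such fingerprint and require other detection methods.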
Université Grenoble Alpes is a founding partner of The Conversation online media. This website combines academic expertise and journalistic know-how to provide the general public with free, independent, quality information. The articles, in a short format, deal with current affairs and social phenomena. They are written by researchers and academics in collaboration with a team of experienced journalists.