Insider Brief
- An algorithm that detects duplicated images in papers faster than human reviewers, and catches duplications they miss, has been developed.
- Independent biologist Sholto David examined hundreds of research papers for duplicated images, comparing his manual findings with those of an AI tool.
- While the AI operated much faster and found almost all of the suspect papers David identified, it also flagged 41 that he had missed, showcasing its potential for detecting doctored images.
RESEARCH NEWS — October 3, 2023 (Nature) — Scientific image sleuth Sholto David blogs about image manipulation in research papers, a pastime that has exposed him to many accounts of scientific fraud. But other scientists “are still a little bit in the dark about the extent of the problem”, David says. He decided he needed some data.
The independent biologist in Pontypridd, UK, spent the best part of several months poring over hundreds of papers in one journal, looking for any with duplicated images. Then he ran the same papers through an artificial-intelligence (AI) tool. Working at two to three times David’s speed, the software found almost all of the 63 suspect papers that he had identified — and 41 that he’d missed. David described the exercise last month in a preprint [1], one of the first published comparisons of human versus machine for finding doctored images.
The findings come as academic publishers reckon with the problem of image manipulation in scientific papers. In a 2016 study [2], renowned image-forensics specialist Elisabeth Bik, based in San Francisco, California, and her colleagues reported that almost 4% of papers she had visually scanned in 40 biomedical-science journals contained inappropriately duplicated images.
Not all image manipulation is done with nefarious intent. Authors might tinker with images by accident, for aesthetic reasons or to make a figure more understandable. But journals and others would like to catch images with alterations that cross the line, whatever the authors’ motivation. And now they are turning to AI for help.
Fraud busters
Some 200 universities, publishers and scientific societies already rely on Imagetwin, the tool that David used for his study. The software compares images in a paper with more than 25 million images from other publications — the largest such database in the image-integrity world, according to Imagetwin’s developers.
Bik has been using Imagetwin regularly to supplement her own skills and calls it her “standard tool”, although she emphasizes that the AI has weaknesses as well as strengths — for instance, it can miss duplications in images with low contrast. She and David both get free access to the software from ImageTwin AI, the Vienna-based company that developed Imagetwin, and provide feedback to the developers.
Some publishers have turned to other AI tools. Journals published by the American Association for Cancer Research in Philadelphia, Pennsylvania, screen papers with the AI tool Proofig. Frontiers in Lausanne, Switzerland, has developed its own software to check papers for its family of journals. A spokesperson for Springer Nature, which publishes Nature, says that the company is “continuing to explore and develop tools for image checking”. (Nature’s news team is editorially independent of its publisher.)
Part of the draw of Imagetwin, specialists say, is that it looks for duplications in two ways. The software makes “something like a fingerprint” for every image in a paper, says Patrick Starke, one of its developers. It then scans the entire paper for repeats of that fingerprint. It also scans its large database to see whether that fingerprint appears in past papers — a process that takes only five to ten seconds.
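Starke’s “fingerprint” description maps onto a family of techniques known as perceptual hashing, in which each image is reduced to a compact signature that survives resizing, re-encoding and mild edits, so that near-identical images yield near-identical signatures. Imagetwin’s actual method is not public, so the sketch below is only a minimal illustration of the general idea, using a simple difference hash (dHash) built on Pillow; the function names and the Hamming-distance threshold here are illustrative assumptions, not details of the tool.

```python
# Minimal sketch of fingerprint-based duplicate detection using a
# perceptual "difference hash". Illustrative only: Imagetwin's real
# fingerprinting method is proprietary and not described in the article.
from itertools import combinations
from PIL import Image  # pip install Pillow

def dhash(path: str, hash_size: int = 8) -> int:
    """Shrink to (hash_size+1) x hash_size grayscale, then record whether
    each pixel is brighter than its right-hand neighbour (one bit each)."""
    img = Image.open(path).convert("L").resize((hash_size + 1, hash_size))
    px = list(img.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = px[row * (hash_size + 1) + col]
            right = px[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def find_duplicates(paths: list[str], threshold: int = 5) -> list[tuple[str, str]]:
    """Flag pairs of images whose fingerprints are nearly identical.
    The threshold of 5 bits (out of 64) is an arbitrary illustrative choice."""
    fp = {p: dhash(p) for p in paths}
    return [(a, b) for a, b in combinations(paths, 2)
            if hamming(fp[a], fp[b]) <= threshold]
```

Checking a new fingerprint against a stored collection of millions then becomes an indexing problem: hashes are compared by Hamming distance rather than pixel by pixel, which is what makes a five-to-ten-second database scan plausible.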
A long slog
For his study, David sifted through more than 700 papers with relevant images published between 2014 and 2023 in Toxicology Reports, a journal he chose in part because it contains a lot of pictures, and in part because in 2021 its publisher, Elsevier in Amsterdam, added an expression of concern to an entire special issue.
After checking the papers visually, David gave the AI a go and found that it worked “much faster than me very carefully staring at images for a long time”, although it did miss four papers that he had flagged. All told, there were duplications in around 16% of the analysed papers with relevant images.
That’s considerably higher than the 4% Bik calculated, but she says David’s figure isn’t surprising. In her analysis, individual journals had duplications in between 0.3% and 12% of their papers, with higher-impact journals tending to have fewer duplications.
It is “entirely plausible” that 16% of a journal’s images could include duplications, agrees Jana Christopher, an image-integrity analyst at FEBS Press in Heidelberg, Germany, who has free access to Imagetwin and uses it along with other software. In her work looking over papers before they are published, Christopher flags about one-third for further investigation.
David posted his study on the preprint server bioRxiv on 5 September. It has not yet been peer reviewed. An Elsevier spokesperson says: “We are aware of the preprint and have launched an internal investigation that is under way at this time.” The editor-in-chief of Toxicology Reports, Lawrence Lash, says he has nothing to add to that statement.
Bik finds Imagetwin especially useful for “complex figures with lots of panels”. It can perform nearly instant scans of images that can take her more than half an hour to dissect on her own.
“It’s really nice to have a software as a second pair of eyes,” agrees Christopher. But like Bik, she says Imagetwin has its shortcomings. “I often find additional [problems] that aren’t duplications and even duplications that the software didn’t flag,” Christopher says.
Part of the process
The end goal, Christopher says, is to incorporate AI tools such as Imagetwin into the paper-review process, just as many publishers routinely use software to scan text for plagiarism. But AI on its own isn’t enough. “You have to use your own expertise and question these things. None of the flags you receive [from Imagetwin] are a definite ‘This is fraud,’” she says.
Starke says that universities are using Imagetwin to review papers that their faculty members are submitting to journals. He declined to provide detailed numbers or to name any of the software’s users.
Christopher hopes that the roll-out of more AI tools could democratize journals’ ability to screen papers. “I think we need to shed the idea that it’s a luxury — it actually adds value to the journal.”
doi: https://doi.org/10.1038/d41586-023-02920-y
References
1. David, S. Preprint at bioRxiv https://doi.org/10.1101/2023.09.03.556099 (2023).
2. Bik, E. M., Casadevall, A. & Fang, F. C. mBio 7, e00809-16 (2016).
Featured image credit: Alexandra_Koch / Pixabay