An infographic of a rat with an absurdly large penis. Another shows human legs with too many bones. An introduction that begins: “Sure, here’s a possible introduction for your topic.”
These are some of the most egregious examples of AI-generated content to have recently made their way into scholarly journals, shedding light on the wave of AI-produced text and images flooding the academic publishing industry.
Several experts who track down problems in studies told AFP that the rise of artificial intelligence has supercharged existing problems in the multibillion-dollar sector.
All the experts emphasized that AI programs like ChatGPT can be a useful tool for writing or translating documents — if thoroughly tested and disclosed.
But that has not been the case in several recent instances that somehow slipped past peer review.
Earlier this year, a clearly AI-generated graphic of a rat with impossibly huge genitalia was widely shared on social media.
It was published in a journal by the academic giant Frontiers, which later retracted the study.
Another study was retracted last month for featuring an AI-generated graphic showing legs with bizarre, multi-jointed bones that resembled hands.
While those examples involved images, it is ChatGPT, the chatbot launched in November 2022, that is thought to have most changed how the world’s researchers present their findings.
A study published by Elsevier went viral in March for its introduction, which was clearly ChatGPT output: “Sure, here’s a possible introduction for your topic.”
Such disturbing examples are rare and unlikely to pass the peer-review process in the most prestigious journals, several experts told AFP.
Rise of paper mills
It is not always so easy to detect the use of artificial intelligence. But one indication is that ChatGPT tends to favor certain words.
Andrew Gray, a librarian at University College London, combed through millions of papers looking for the overuse of words such as “meticulous”, “intricate” or “commendable”.
He determined that at least 60,000 papers involved the use of artificial intelligence in 2023, more than one percent of the annual total.
“For 2024 we will see very significantly increased numbers,” Gray told AFP.
Meanwhile, more than 13,000 papers were retracted last year, by far the most in history, according to the US-based group Retraction Watch.
Artificial intelligence has allowed bad actors in scientific publishing and academia to “industrialize the overflow” of “useless” papers, Retraction Watch co-founder Ivan Oransky told AFP.
Such bad actors include what are known as paper mills.
These “fraudsters” sell authorship to researchers, pumping out vast quantities of very poor quality, plagiarized or fake papers, said Elisabeth Bik, a Dutch researcher who detects scientific image manipulation.
Two percent of all studies are thought to be published by paper mills, but the percentage is “exploding” as artificial intelligence opens the floodgates, Bik told AFP.
This problem was highlighted when academic publishing giant Wiley bought troubled publisher Hindawi in 2021.
Since then, the US firm has retracted more than 11,300 papers related to Hindawi special issues, a Wiley spokesperson told AFP.
Wiley has now introduced a “paper mill detection service” to detect AI misuse, a service that is itself powered by AI.
“Vicious cycle”
Oransky stressed that the problem was not just paper mills, but a broader academic culture that pushes researchers to “publish or perish.”
“Publishers have made 30 to 40 percent margins and billions of dollars in profit by building these systems that require volume,” he said.
The insatiable demand for more and more papers is piling pressure on academics who are ranked by their output, creating a “vicious cycle”, he said.
Many have turned to ChatGPT to save time — which isn’t necessarily a bad thing.
Because almost all papers are published in English, Bik said AI translation tools can be valuable to researchers — including herself — for whom English is not their first language.
But there are also fears that mistakes, inventions and unintentional plagiarism by artificial intelligence could further erode society’s trust in science.
Another example of AI misuse came to light last week, when a researcher discovered what appeared to be a ChatGPT-rewritten version of one of his own studies published in an academic journal.
Samuel Payne, a professor of bioinformatics at Brigham Young University in the United States, told AFP he was asked to peer review the study in March.
After realizing it was “100 percent plagiarism” of his own study — but with the text apparently reworded by an artificial intelligence program — he rejected the paper.
Payne said he was “shocked” to discover that the plagiarized work had simply been published elsewhere, in a Wiley journal called Proteomics.
The paper has not been retracted.
Source: AFP