A growing number of researchers are embedding invisible or machine-readable text into scientific papers—such as white-colored font or micro-sized characters—to manipulate AI-powered peer review tools, according to academic watchdogs.
These concealed instructions, which are often undetectable to the human eye, are being picked up by artificial intelligence systems used to screen and summarize academic content on preprint servers like arXiv and bioRxiv. Some of these messages reportedly contain self-promotional language or misleading metadata designed to artificially boost visibility, ranking, or acceptance.
In response, several studies found to contain these hidden elements will be withdrawn from major preprint repositories, as concerns rise over the integrity of automated review pipelines.
“This kind of manipulation, while technically clever, undermines the purpose of peer review and erodes trust in scientific publishing,” said a spokesperson from a leading academic consortium.
As the use of AI expands across academic publishing, experts are calling for new safeguards and detection tools to ensure research quality isn’t compromised by algorithmic exploitation.
0 Comments