Researchers Are Hiding Prompts in Papers to Trick AI Reviewers
Some academics are secretly embedding prompts into research papers to sway AI-generated peer reviews. The result is a rise in biased, overly favorable feedback—and growing concern over the integrity of the academic publishing process.
A new form of peer review manipulation is emerging as AI becomes a common assistant in academic workflows. Researchers have begun hiding prompts inside the text of their papers—often using invisible formatting like white text on a white background—to influence how language models like ChatGPT review their work. These prompts instruct AI to write positive or uncritical reviews, skewing the evaluation process when reviewers unknowingly feed the content into AI systems.
This tactic exploits the increasing reliance on AI for drafting reviews, especially in fast-moving fields like machine learning. Studies have shown that up to 17 preprints were found using such embedded directives, leading to disproportionately positive assessments. Since these instructions are invisible to human reviewers, they pass undetected, introducing serious questions about fairness and transparency.
The implications are far-reaching. Journals and conferences may inadvertently accept substandard work based on artificially inflated reviews. Honest researchers are put at a disadvantage. Peer review, once the gold standard of scientific vetting, now faces a new kind of adversarial prompt engineering. As AI becomes further integrated into academic workflows, institutions must act swiftly to detect hidden manipulations and enforce disclosure policies.
Pure Neo Signal:
Let’s stop pretending this is clever. Researchers are literally hiding instructions in their papers, using white-on-white text like it’s 1999 HTML spam, telling ChatGPT to write glowing reviews. And reviewers, in a rush or just out of habit, are pasting entire manuscripts into AI tools without realizing they’ve just handed over the mic to a rigged script. What you get back isn’t a review. It’s a PR blurb ghostwritten by a chatbot that’s been duped by invisible ink.
The scary part? This is working. Papers are getting better reviews not because they’re strong, but because the AI was coached to say nice things. It’s not peer review anymore. It’s peer prompt manipulation. And if journals and conferences don’t crack down, we’re going to end up with a scientific record curated by the most prompt-savvy, not the most rigorous.
We love
and you too
If you like what we do, please share it on your social media and feel free to buy us a coffee.