Why authors aren’t disclosing AI use…

… and what publishers should (not) do about it

Avi Staiman, founder and CEO of Academic Language Experts and a close observer of how researchers use AI, takes a hard look at a widening gap in scholarly publishing: expectations are rising, but transparency is not keeping pace. In this op-ed, he explores why one of the sector’s most urgent conversations may be missing the mark — and why some well-intentioned responses could do more harm than good. If authors, editors, and publishers are not working from the same assumptions, what exactly is already slipping through the cracks?

This article is the first installment of a two-part series on AI disclosures in scholarly publishing.


Photo/Video: AI-generated, Freepik

When the initial hysteria over ensuring AI didn’t become an author died down, publishers quickly understood that they needed to walk a fine line in responding to authors’ use of AI tools: they couldn’t reject AI use outright, but they also couldn’t allow it to go unchecked. Most publishers subsequently put out vague and simplistic ‘declaration requirements’ in the hope that author transparency and declarations would serve as a panacea for any errors or hallucinations that ultimately made it into print. Some went as far as suggesting that failure to report should be considered research misconduct.

The implicit expectation was that if authors declared to journals how and where they used AI tools as part of the research process, editors and peer reviewers would be able to critically consider those uses and make a determination regarding their efficacy and appropriateness. This was expected to enable innovation while preserving the integrity of the published literature.

However, journal editors I speak with report a fascinating (albeit somewhat expected) development. They see telltale signs of AI use popping up throughout submissions, including, but not limited to, hallucinated references, tortured phrases, near-perfect English that lacks substance, and an abundance of em dashes. Editors find themselves spending considerable time sifting through and rejecting subpar, AI-generated work, which is not always discernible at first glance.

With 62% of researchers reporting that they have used AI at some point in their research and publication process, we would expect editors to see a plethora of AI declarations and disclosures. Yet only a negligible percentage of authors seem to actually disclose their AI use. Emerging research suggests that a wealth of published papers rely on AI without disclosure: one study randomly reviewed 200 articles and found only two author declarations. This early research reveals a vast gulf between guidelines and compliance.

Why are researchers not disclosing their AI use?

Over the last three years, I have travelled across Europe, training over 30,000 researchers on the responsible use of AI in research. I often ask them whether they report their AI use when submitting to journals, and if not, why. They often confess that they use AI for a variety of tasks across the research and publication process, far more than they like to admit. Several key reasons help explain authors’ non-compliance and warrant attention from publishers.

  1. Fear of negative repercussions

The main reason researchers don’t report AI usage is fear that it will be held against them. Given the numerous gatekeepers that any submission passes through (desk editor, chief editor, peer reviewers), all it takes is one individual with a negative view of AI to recommend rejection. Ironically, many researchers seem to have embraced AI for themselves, but still perceive it as a tool for cutting corners or as a real threat to research integrity when employed by their peers. Much like deciding whether one is “fit to drive” after having a few drinks, researchers trust their own judgment about their own AI use but not that of others. So long as using AI in research is stigmatized, there will be no breakthrough in transparent disclosure and reporting.

Even when publishers present themselves as neutral, authors may still fear that disclosing AI use could quietly shape editors’ perceptions of rigor, originality, or scholarly competence. The result is an uneven compliance environment, where those with the least influence feel the strongest incentive to withhold disclosure.

  2. Lack of clarity about how/what/when/where to report

Publishers and journals vary widely in their standards for reporting the use of AI tools, leading to confusion among authors. Many journals require only a general disclosure of use without extensive documentation, while others demand new appendices to accompany the article, including screenshots of any chatbot interactions. The ACM guidelines, for example, call for a comprehensive submission of all AI chats related to the research. One early-career researcher I spoke with described the personal rule he applies when deciding whether to report: if he uses AI to perform a task more efficiently, a task he could have done himself, he doesn’t report it.

  3. Burden lacking incentive

Because reporting standards vary so widely across publishers and journals, authors see new reporting requirements as an undue burden for which they have neither the time nor the energy. It is often unclear where and how authors are expected to report AI use, adding to their existing frustration with editorial systems. Proper disclosure under the current guidelines can easily turn into hours of additional work. As we have learned from data sharing, meaningful compliance emerges only if publishers properly incentivize good practice and/or penalize bad practice.

Even if researchers are willing to disclose AI use, they aren’t always certain which use cases require disclosure. Is reporting required if AI is used to help locate relevant literature for the Introduction? Do researchers now need to disclose using Google Scholar Labs but not traditional Google Scholar? What if they used AI to help phrase a section of the discussion? What if the entire research question was AI’s idea in the first place? Most guidelines don’t go into this level of resolution, leaving researchers unclear about expectations.

  4. Not aware AI is being used

Many major technology providers have incorporated AI into existing products in clever, nearly seamless ways. Universities I train often run the full Microsoft suite, with Copilot built into default search, email drafts, and document drafting. Authors can easily use AI within any of these platforms without being fully aware that they are employing it. Even if they do know, documenting and recording every step of the creative process in the midst of research is rarely foremost in their minds, and might even inhibit creativity. For early adopters, documenting AI use is beginning to resemble trying to document every Google search made during the research process.

  5. The confusion between AI and plagiarism

Many researchers I speak with perceive AI use in their writing as analogous to plagiarism. As a result, authors tend to focus on making AI use undetectable (rather than transparent) by paraphrasing and purging their texts of obvious AI signatures. At Academic Language Experts (the author services company I own and manage), we are seeing a surge in requests to ‘de-AI’ texts that have been written or translated using AI, either because authors fear the texts will fail AI detection tests or because a reasonably discerning editor will realize that AI was used.

  6. Policies lack teeth

Some authors know that publishers don’t carefully check for signs of AI use in their research and are willing to take the risk of using AI tools without disclosure because of the benefits these tools provide. This confidence is reinforced when such use goes unchallenged.


Considerations for publishers moving forward

A strong push should still be made for robust AI education and guidelines by publishers (see Wiley’s admirable attempt, for example). Some have argued that, due to the limitations on policy enforcement, disclosure should be voluntary. Regardless of whether publishers believe the brunt of responsibility should lie with them or with the authors, no publisher wants to be in the position of retracting batches of articles regularly due to integrity issues. At the very least, publishers can help authors and reviewers understand the pitfalls that AI presents and ensure the replicability and reliability of the research. STM should be applauded for publishing an initial classification of AI use cases, but it is still unclear how these should be conveyed to authors, what use cases each journal allows with and without disclosure, and whether this classification can keep up with ever-changing models and tools.

There is one solution I encourage publishers not to adopt: investing in AI text detection tools. Not only are these tools notoriously unreliable and slow to adapt as AI models improve, but they also reinforce the idea that using AI to help with writing is forbidden, a position not taken by most publishers. If we believe that AI tools will help ESL scholars level the playing field for publication, why are we so obsessed with trying to detect their use? In commercial publishing, proposals have been made to differentiate between AI-generated and AI-assisted work, but it is very challenging to define where one starts and the other stops.

Publishers need to put serious thought into how, when, and where they want authors to declare their AI use; otherwise, we are likely to continue seeing minimal disclosures. And publishers need to state clearly whether or not AI use will affect the review of a paper. In a follow-up article, I will lay out my suggestions and a framework for how publishers can be more accepting of AI use without sacrificing research integrity.

Published by courtesy of the author and The Scholarly Kitchen.


Avi Staiman (LinkedIn profile page) is the founder and CEO of Academic Language Experts, a company dedicated to empowering English as an Additional Language authors to elevate their research for publication and bring it to the world. Avi is a core member of CANGARU, where he represents EASE in creating legislation and policy for the responsible use of AI in research. He is also the co-host of the New Books Network ‘Scholarly Communication’ Podcast.


This article is part of the Digital Publishing Technologien channel, which covers content strategies and processes. The channel is sponsored by Fabasoft Xpublisher.

More information about Fabasoft Xpublisher