Ron Aardening
June 13th, 2025

Why We Need AI Standards in Scholarly Publishing: Reflections on the NISO Workshop

scholarly communication

Artificial intelligence (AI) is no longer a novelty in scholarly publishing. It's embedded in everything from peer review to plagiarism detection and now, increasingly, in the creation, summarisation, and dissemination of research itself. The problem? The rules of engagement are still being written, and the pace of change is dizzying.


Recently, the National Information Standards Organization (NISO) convened a workshop with a straightforward yet urgent goal: to chart a path toward practical, community-driven standards for AI in scholarly communication. Todd Carpenter's report on the event is both a wake-up call and a roadmap. It's clear that if we want to harness AI's benefits without undermining trust in research, we need to act together, and soon.

Image by Ron Aardening: a modern library interior in the style of contemporary editorial and travel photography. In the foreground, a researcher sits at a workstation, their screen casting a subtle glow. The mood is contemplative, blending tradition and technology, with a cinematic use of shadow and a rich, timeless colour palette.

What's the Gist?

The workshop brought together publishers, librarians, technologists, and researchers. The consensus? We need standards for how AI tools interact with scholarly content, how outputs are attributed, and how transparency is maintained. Seven priority areas emerged, but three stood out to me:

  • Usage tracking and auditing: We track human readers; why not AI bots? If AI models are ingesting research at scale, publishers and libraries need to know what's being used and how. This is about more than metrics: it's about understanding the value chain when machines, not just humans, are the primary audience. A minimal sketch of what such auditing could look like follows this list.
  • Attribution and provenance: When AI generates text or insights, how do we ensure proper credit for original authors? The risk of "AI plagiarism" is real, and without clear provenance, the integrity of the scholarly record is at stake.
  • Transparency and disclosure: Researchers and publishers need to know what's inside the black box. Model cards, disclosure statements, and standardised metadata could help, but only if adopted widely.
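
What could "usage tracking and auditing" look like in practice? Here is a minimal sketch in Python, assuming a conventional web server access log and a short, illustrative list of self-identifying AI crawler user agents; the log path and the agent list are assumptions for illustration, not part of any existing standard:

```python
# Minimal sketch: tally requests from self-identifying AI crawlers in a web access log.
# The log path and the user-agent markers are illustrative assumptions, not a standard.
from collections import Counter

AI_AGENT_MARKERS = ["GPTBot", "CCBot", "ClaudeBot", "PerplexityBot"]  # illustrative, not exhaustive


def tally_ai_requests(log_path: str) -> Counter:
    """Count log lines whose user-agent field mentions a known AI crawler marker."""
    counts: Counter = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            for marker in AI_AGENT_MARKERS:
                if marker in line:
                    counts[marker] += 1
                    break
    return counts


if __name__ == "__main__":
    # "access.log" is a hypothetical combined-format access log.
    for bot, hits in tally_ai_requests("access.log").most_common():
        print(f"{bot}: {hits} requests")
```

A script like this only catches crawlers that identify themselves; undeclared scraping slips straight past it, which is exactly why a maintained registry of AI agents and shared auditing guidance would be so valuable.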

Why Is This Relevant?

For those of us working in scholarly communication, this isn't an abstract debate. It's about the future of trust in research. If we don't set standards now, we risk a fragmented ecosystem where every publisher, platform, and researcher follows different rules. That's a recipe for confusion, not progress.

I've seen firsthand at Maastricht University Library how AI bots are already crawling open-access repositories. Sometimes, this is benign, but at other times, it strains infrastructure or raises questions about fair use and compensation. Without standards, we're left guessing—and reacting rather than shaping the future.
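
For repositories, the bluntest lever available today is robots.txt, which well-behaved crawlers consult before fetching content. The sketch below, assuming a hypothetical repository URL and an illustrative list of crawler tokens, uses Python's standard urllib.robotparser to check what a repository currently permits for specific AI agents:

```python
# Minimal sketch: check what a repository's robots.txt currently allows for AI crawlers.
# The repository URL, record path, and agent tokens are illustrative assumptions.
from urllib.robotparser import RobotFileParser

REPOSITORY = "https://repository.example.edu"  # hypothetical open-access repository
RECORD_URL = f"{REPOSITORY}/record/12345"      # hypothetical item landing page
AI_AGENTS = ["GPTBot", "CCBot", "ClaudeBot"]   # illustrative, not exhaustive

parser = RobotFileParser()
parser.set_url(f"{REPOSITORY}/robots.txt")
parser.read()  # fetch and parse the live robots.txt

for agent in AI_AGENTS:
    verdict = "allowed" if parser.can_fetch(agent, RECORD_URL) else "disallowed"
    print(f"{agent}: {verdict}")
```

Because robots.txt is purely advisory, it cannot answer the fair-use and compensation questions above; it mainly illustrates how little formal machinery repositories have to work with today.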

Connecting to Current Developments

This isn't happening in a vacuum. COUNTER is working on guidance for AI traffic. The STM Association has a draft framework for the use of AI in manuscripts. COAR is launching a task force on AI and repositories. But these are early days, and coordination is sorely needed.


The NISO workshop's call for community input is crucial. If you're a researcher, publisher, librarian, or technologist, your perspective matters. The standards we set now will shape not only workflows but also the very fabric of scholarly communication for years to come.

How Can We Benefit?

Clear standards would make life easier for everyone:

  • Authors would know when and how to disclose the use of AI (see the disclosure sketch after this list).
  • Publishers could automate compliance and attribution.
  • Libraries could better manage access and infrastructure.
  • Readers could trust the provenance of what they read.
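
To make the first point concrete: there is no agreed schema yet for an AI-use disclosure, but a machine-readable record attached to a submission might look roughly like the sketch below. Every field name here is hypothetical, loosely inspired by the model cards and disclosure statements mentioned earlier:

```python
# Minimal sketch of a machine-readable AI-use disclosure an author might attach to a
# submission. There is no agreed schema yet; every field name here is hypothetical.
from dataclasses import dataclass, asdict
import json


@dataclass
class AIUseDisclosure:
    tool_name: str        # the assistant or model used
    tool_version: str     # version or date of the model, if known
    purpose: str          # what the tool was used for (drafting, translation, analysis, ...)
    human_oversight: str  # how the output was reviewed and by whom
    content_affected: str # which sections or artefacts the tool touched


disclosure = AIUseDisclosure(
    tool_name="ExampleAssistant",  # hypothetical tool
    tool_version="2025-05",
    purpose="Language editing of the introduction",
    human_oversight="All suggestions reviewed and revised by the corresponding author",
    content_affected="Section 1 only",
)

# Serialised form a publisher could ingest alongside the article's metadata.
print(json.dumps(asdict(disclosure), indent=2))
```

A standard would pin down which fields are required, how the record travels with the article's metadata, and how publishers validate it, which is exactly the level of detail the NISO process could settle.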

And let's not forget: well-designed standards can foster innovation, not stifle it. They create a level playing field and a shared language for collaboration.

Who Should Care?

Anyone who cares about research integrity, open access, or the future of knowledge should be paying attention. AI is here to stay, and the choices we make now will echo for decades.

Key Takeaways

  • AI is transforming scholarly publishing, but without standards, we risk confusion and loss of trust.
  • NISO's workshop identified several urgent priorities, including usage tracking, attribution, and transparency.
  • Community input is vital—now is the time to get involved.
  • The right standards can support both innovation and integrity.


Todd Carpenter's detailed report on the NISO workshop is essential reading for anyone interested in the intersection of AI and scholarly communication. It offers both a reality check and a call to action: read it to understand what's at stake and how you can contribute.