Why it’s important to know who did what in a research paper

11 July 2025

Kim Eggleton is Head of Peer Review and Research Integrity, and Daniel Keirs is Head of Journal Strategy and Performance at IOP Publishing.

In today’s world, scientific breakthroughs are rarely the product of a single mind. Instead, they emerge from the collective efforts of diverse teams working together in labs, research centres, and virtual collaborations across the globe. A century ago, when the Institute of Physics published its first papers, scientific manuscripts often featured just one or two authors. Now, it’s common to see thirty names on a single article, each contributing unique expertise to solve complex problems. This shift reflects not only the increased volume of research but also the intricate nature of modern science and the broad range of skills it demands.

For instance, some team members contribute to the conceptualisation of the work. Others ensure smooth project administration, curate data, or build the software needed to run simulations. Some may lead the investigation itself, while others translate the results into compelling writing.

Yet, despite this diversity in contribution, traditional author lists on publications tell us very little. One major issue is ‘hyperauthorship’, a growing trend in the physical sciences where papers have an exceptionally large number of authors. Information scientist Blaise Cronin coined the term in 2001 to describe papers with 100 or more authors[1]. A well-known example is the discovery of the Higgs boson, which involved papers with thousands of contributors from CERN and beyond. While this is an example of a legitimate large collaboration, there are instances where authorship has been obtained improperly or where the definition of authorship has been stretched.

Another problem with traditional author lists is that early career researchers often go unnoticed if they aren’t listed as the first author. Additionally, the way names are ordered varies wildly between disciplines, making it hard to understand each person’s role or level of contribution. This inconsistency can obscure recognition of crucial work, especially for researchers at the beginning of their careers.

As research becomes more specialised, the need for clarity around who does what has grown increasingly urgent.

Historically, contribution statements have often been buried in acknowledgements or written in free-form text that lacks standardisation. This inconsistency is also a barrier to improving transparency and building trust in science.

CRediT: the Contributor Roles Taxonomy

CRediT is a community-developed, NISO-standardised system designed to clearly and consistently capture the roles played by contributors to research outputs. It defines 14 distinct roles, from conceptualisation and methodology to software development and writing, allowing authors to identify and communicate their specific input.

IOP Publishing (IOPP) has recently adopted CRediT across all its proprietary journals. Authors will now be able to declare their roles in a structured, discoverable, and transparent way. For researchers, this means greater recognition and a clearer path to demonstrating their professional impact. For institutions and funders, it offers a more holistic and nuanced picture of research contribution, far beyond the outdated metric of first or last author. And for science as a whole, it’s a step toward greater integrity, accountability, and trust.
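What "structured and discoverable" means in practice is that each role is recorded in the article's metadata with a machine-readable identifier, not just as free prose. As an illustrative sketch (the exact markup varies by publisher; the contributor name and role choices here are hypothetical), the NISO CRediT recommendation shows roles tagged in JATS XML via the vocabulary attributes on the `<role>` element:

```xml
<!-- Hypothetical JATS-style contributor record with CRediT roles.
     Each role links to its term in the NISO CRediT vocabulary. -->
<contrib contrib-type="author">
  <name><surname>Lee</surname><given-names>A.</given-names></name>
  <role vocab="credit"
        vocab-identifier="https://credit.niso.org/"
        vocab-term="Conceptualization"
        vocab-term-identifier="https://credit.niso.org/contributor-roles/conceptualization/">Conceptualization</role>
  <role vocab="credit"
        vocab-identifier="https://credit.niso.org/"
        vocab-term="Software"
        vocab-term-identifier="https://credit.niso.org/contributor-roles/software/">Software</role>
</contrib>
```

Because every role carries a stable term identifier, downstream services can aggregate and compare contributions consistently across journals, rather than parsing free-text acknowledgements.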

The days of the lone researcher in their shed may be behind us, but the passion for discovery remains. What’s changed is that now, we can finally give credit where credit is due, and authors can take responsibility for their specific contributions.


[1] Cronin, B. J. Am. Soc. Inf. Sci. Technol. 52, 558–569 (2001)